00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 110
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3288
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.042 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.043 The recommended git tool is: git
00:00:00.043 using credential 00000000-0000-0000-0000-000000000002
00:00:00.068 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.083 Fetching changes from the remote Git repository
00:00:00.085 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.110 Using shallow fetch with depth 1
00:00:00.110 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.110 > git --version # timeout=10
00:00:00.131 > git --version # 'git version 2.39.2'
00:00:00.132 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.146 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.146 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.958 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.968 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.977 Checking out Revision 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 (FETCH_HEAD)
00:00:05.977 > git config core.sparsecheckout # timeout=10
00:00:05.986 > git read-tree -mu HEAD # timeout=10
00:00:06.000 > git checkout -f 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=5
00:00:06.015 Commit message: "doc: add chapter about running CI Vagrant images on dev-systems"
00:00:06.016 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10
00:00:06.121 [Pipeline] Start of Pipeline
00:00:06.135 [Pipeline] library
00:00:06.137 Loading library shm_lib@master
00:00:06.137 Library shm_lib@master is cached. Copying from home.
00:00:06.152 [Pipeline] node
00:00:06.161 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest
00:00:06.163 [Pipeline] {
00:00:06.172 [Pipeline] catchError
00:00:06.173 [Pipeline] {
00:00:06.185 [Pipeline] wrap
00:00:06.194 [Pipeline] {
00:00:06.202 [Pipeline] stage
00:00:06.204 [Pipeline] { (Prologue)
00:00:06.222 [Pipeline] echo
00:00:06.224 Node: VM-host-WFP1
00:00:06.230 [Pipeline] cleanWs
00:00:06.240 [WS-CLEANUP] Deleting project workspace...
00:00:06.240 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.245 [WS-CLEANUP] done
00:00:06.433 [Pipeline] setCustomBuildProperty
00:00:06.503 [Pipeline] httpRequest
00:00:06.525 [Pipeline] echo
00:00:06.526 Sorcerer 10.211.164.101 is alive
00:00:06.531 [Pipeline] httpRequest
00:00:06.535 HttpMethod: GET
00:00:06.536 URL: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:06.536 Sending request to url: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:06.548 Response Code: HTTP/1.1 200 OK
00:00:06.549 Success: Status code 200 is in the accepted range: 200,404
00:00:06.549 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:09.030 [Pipeline] sh
00:00:09.312 + tar --no-same-owner -xf jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:09.328 [Pipeline] httpRequest
00:00:09.354 [Pipeline] echo
00:00:09.355 Sorcerer 10.211.164.101 is alive
00:00:09.365 [Pipeline] httpRequest
00:00:09.369 HttpMethod: GET
00:00:09.370 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:09.371 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:09.391 Response Code: HTTP/1.1 200 OK
00:00:09.392 Success: Status code 200 is in the accepted range: 200,404
00:00:09.392 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:40.665 [Pipeline] sh
00:00:40.954 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:43.502 [Pipeline] sh
00:00:43.785 + git -C spdk log --oneline -n5
00:00:43.785 5fa2f5086 nvme: add lock_depth for ctrlr_lock
00:00:43.785 330a4f94d nvme: check pthread_mutex_destroy() return value
00:00:43.785 7b72c3ced nvme: add nvme_ctrlr_lock
00:00:43.785 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock
00:00:43.785 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout
00:00:43.804 [Pipeline] withCredentials
00:00:43.814 > git --version # timeout=10
00:00:43.825 > git --version # 'git version 2.39.2'
00:00:43.842 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:43.845 [Pipeline] {
00:00:43.853 [Pipeline] retry
00:00:43.855 [Pipeline] {
00:00:43.872 [Pipeline] sh
00:00:44.154 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:44.166 [Pipeline] }
00:00:44.186 [Pipeline] // retry
00:00:44.191 [Pipeline] }
00:00:44.210 [Pipeline] // withCredentials
00:00:44.219 [Pipeline] httpRequest
00:00:44.244 [Pipeline] echo
00:00:44.246 Sorcerer 10.211.164.101 is alive
00:00:44.254 [Pipeline] httpRequest
00:00:44.258 HttpMethod: GET
00:00:44.258 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:44.259 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:44.272 Response Code: HTTP/1.1 200 OK
00:00:44.273 Success: Status code 200 is in the accepted range: 200,404
00:00:44.273 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:51.721 [Pipeline] sh
00:00:52.006 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:53.426 [Pipeline] sh
00:00:53.710 + git -C dpdk log --oneline -n5
00:00:53.710 caf0f5d395 version: 22.11.4
00:00:53.710 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:00:53.710 dc9c799c7d vhost: fix missing spinlock unlock
00:00:53.710 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:00:53.710 6ef77f2a5e net/gve: fix RX buffer size alignment
00:00:53.736 [Pipeline] writeFile
00:00:53.753 [Pipeline] sh
00:00:54.037 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:54.049 [Pipeline] sh
00:00:54.332 + cat autorun-spdk.conf
00:00:54.332 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:54.332 SPDK_TEST_NVME=1
00:00:54.332 SPDK_TEST_FTL=1
00:00:54.332 SPDK_TEST_ISAL=1
00:00:54.332 SPDK_RUN_ASAN=1
00:00:54.332 SPDK_RUN_UBSAN=1
00:00:54.332 SPDK_TEST_XNVME=1
00:00:54.332 SPDK_TEST_NVME_FDP=1
00:00:54.332 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:54.332 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:54.332 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:54.340 RUN_NIGHTLY=1
00:00:54.342 [Pipeline] }
00:00:54.358 [Pipeline] // stage
00:00:54.373 [Pipeline] stage
00:00:54.375 [Pipeline] { (Run VM)
00:00:54.389 [Pipeline] sh
00:00:54.672 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:54.672 + echo 'Start stage prepare_nvme.sh'
00:00:54.672 Start stage prepare_nvme.sh
00:00:54.672 + [[ -n 2 ]]
00:00:54.672 + disk_prefix=ex2
00:00:54.672 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:00:54.672 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:00:54.672 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:00:54.672 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:54.672 ++ SPDK_TEST_NVME=1
00:00:54.672 ++ SPDK_TEST_FTL=1
00:00:54.672 ++ SPDK_TEST_ISAL=1
00:00:54.672 ++ SPDK_RUN_ASAN=1
00:00:54.672 ++ SPDK_RUN_UBSAN=1
00:00:54.672 ++ SPDK_TEST_XNVME=1
00:00:54.672 ++ SPDK_TEST_NVME_FDP=1
00:00:54.672 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:54.672 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:54.672 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:54.672 ++ RUN_NIGHTLY=1
00:00:54.672 + cd /var/jenkins/workspace/nvme-vg-autotest
00:00:54.672 + nvme_files=()
00:00:54.672 + declare -A nvme_files
00:00:54.672 + backend_dir=/var/lib/libvirt/images/backends
00:00:54.672 + nvme_files['nvme.img']=5G
00:00:54.672 + nvme_files['nvme-cmb.img']=5G
00:00:54.672 + nvme_files['nvme-multi0.img']=4G
00:00:54.672 + nvme_files['nvme-multi1.img']=4G
00:00:54.672 + nvme_files['nvme-multi2.img']=4G
00:00:54.672 + nvme_files['nvme-openstack.img']=8G
00:00:54.672 + nvme_files['nvme-zns.img']=5G
00:00:54.672 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:54.672 + (( SPDK_TEST_FTL == 1 ))
00:00:54.672 + nvme_files["nvme-ftl.img"]=6G
00:00:54.672 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:54.672 + nvme_files["nvme-fdp.img"]=1G
00:00:54.672 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:54.672 + for nvme in "${!nvme_files[@]}"
00:00:54.672 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:00:54.672 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:54.672 + for nvme in "${!nvme_files[@]}"
00:00:54.672 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G
00:00:54.932 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:54.932 + for nvme in "${!nvme_files[@]}"
00:00:54.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:00:54.932 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:54.932 + for nvme in "${!nvme_files[@]}"
00:00:54.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:00:54.932 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:54.932 + for nvme in "${!nvme_files[@]}"
00:00:54.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:00:54.932 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:54.932 + for nvme in "${!nvme_files[@]}"
00:00:54.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:00:55.192 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:55.192 + for nvme in "${!nvme_files[@]}"
00:00:55.192 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:00:55.451 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:55.451 + for nvme in "${!nvme_files[@]}"
00:00:55.451 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G
00:00:55.451 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:55.451 + for nvme in "${!nvme_files[@]}"
00:00:55.451 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:00:55.712 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:55.712 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:00:55.712 + echo 'End stage prepare_nvme.sh'
00:00:55.712 End stage prepare_nvme.sh
00:00:55.724 [Pipeline] sh
00:00:56.008 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:56.008 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38
00:00:56.268 
00:00:56.268 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:00:56.268 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:00:56.268 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:00:56.268 HELP=0
00:00:56.268 DRY_RUN=0
00:00:56.268 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,
00:00:56.268 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:56.268 NVME_AUTO_CREATE=0
00:00:56.268 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,,
00:00:56.268 NVME_CMB=,,,,
00:00:56.268 NVME_PMR=,,,,
00:00:56.268 NVME_ZNS=,,,,
00:00:56.268 NVME_MS=true,,,,
00:00:56.268 NVME_FDP=,,,on,
00:00:56.268 SPDK_VAGRANT_DISTRO=fedora38
00:00:56.268 SPDK_VAGRANT_VMCPU=10
00:00:56.268 SPDK_VAGRANT_VMRAM=12288
00:00:56.268 SPDK_VAGRANT_PROVIDER=libvirt
00:00:56.268 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:56.268 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:56.268 SPDK_OPENSTACK_NETWORK=0
00:00:56.268 VAGRANT_PACKAGE_BOX=0
00:00:56.268 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:56.268 FORCE_DISTRO=true
00:00:56.268 VAGRANT_BOX_VERSION=
00:00:56.268 EXTRA_VAGRANTFILES=
00:00:56.268 NIC_MODEL=e1000
00:00:56.268 
00:00:56.268 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt'
00:00:56.268 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:00:58.816 Bringing machine 'default' up with 'libvirt' provider...
00:01:00.201 ==> default: Creating image (snapshot of base box volume).
00:01:00.201 ==> default: Creating domain with the following settings...
00:01:00.201 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721693114_014f55b93cedfe7023d8
00:01:00.201 ==> default: -- Domain type: kvm
00:01:00.201 ==> default: -- Cpus: 10
00:01:00.201 ==> default: -- Feature: acpi
00:01:00.201 ==> default: -- Feature: apic
00:01:00.201 ==> default: -- Feature: pae
00:01:00.201 ==> default: -- Memory: 12288M
00:01:00.201 ==> default: -- Memory Backing: hugepages:
00:01:00.201 ==> default: -- Management MAC:
00:01:00.201 ==> default: -- Loader:
00:01:00.201 ==> default: -- Nvram:
00:01:00.201 ==> default: -- Base box: spdk/fedora38
00:01:00.201 ==> default: -- Storage pool: default
00:01:00.201 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721693114_014f55b93cedfe7023d8.img (20G)
00:01:00.201 ==> default: -- Volume Cache: default
00:01:00.201 ==> default: -- Kernel:
00:01:00.201 ==> default: -- Initrd:
00:01:00.201 ==> default: -- Graphics Type: vnc
00:01:00.201 ==> default: -- Graphics Port: -1
00:01:00.201 ==> default: -- Graphics IP: 127.0.0.1
00:01:00.201 ==> default: -- Graphics Password: Not defined
00:01:00.201 ==> default: -- Video Type: cirrus
00:01:00.201 ==> default: -- Video VRAM: 9216
00:01:00.201 ==> default: -- Sound Type:
00:01:00.201 ==> default: -- Keymap: en-us
00:01:00.201 ==> default: -- TPM Path:
00:01:00.201 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:00.201 ==> default: -- Command line args:
00:01:00.201 ==> default: -> value=-device,
00:01:00.201 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:00.201 ==> default: -> value=-drive,
00:01:00.201 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:00.201 ==> default: -> value=-device,
00:01:00.201 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:00.201 ==> default: -> value=-device,
00:01:00.201 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:00.201 ==> default: -> value=-drive,
00:01:00.201 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0,
00:01:00.201 ==> default: -> value=-device,
00:01:00.201 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:00.201 ==> default: -> value=-device,
00:01:00.201 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:00.201 ==> default: -> value=-drive,
00:01:00.201 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:00.201 ==> default: -> value=-device,
00:01:00.201 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:00.201 ==> default: -> value=-drive,
00:01:00.201 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:00.201 ==> default: -> value=-device,
00:01:00.201 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:00.201 ==> default: -> value=-drive,
00:01:00.201 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:00.201 ==> default: -> value=-device,
00:01:00.201 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:00.201 ==> default: -> value=-device,
00:01:00.201 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:00.201 ==> default: -> value=-device,
00:01:00.201 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:00.201 ==> default: -> value=-drive,
00:01:00.201 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:00.201 ==> default: -> value=-device,
00:01:00.201 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:00.460 ==> default: Creating shared folders metadata...
00:01:00.719 ==> default: Starting domain.
00:01:02.099 ==> default: Waiting for domain to get an IP address...
00:01:20.238 ==> default: Waiting for SSH to become available...
00:01:20.238 ==> default: Configuring and enabling network interfaces...
00:01:25.513 default: SSH address: 192.168.121.93:22
00:01:25.513 default: SSH username: vagrant
00:01:25.513 default: SSH auth method: private key
00:01:27.477 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:35.595 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:42.163 ==> default: Mounting SSHFS shared folder...
00:01:43.538 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:01:43.538 ==> default: Checking Mount..
00:01:45.455 ==> default: Folder Successfully Mounted!
00:01:45.455 ==> default: Running provisioner: file...
00:01:46.398 default: ~/.gitconfig => .gitconfig
00:01:46.966 
00:01:46.966 SUCCESS!
00:01:46.966 
00:01:46.966 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:01:46.966 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:46.966 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:01:46.966 
00:01:46.975 [Pipeline] }
00:01:46.992 [Pipeline] // stage
00:01:47.001 [Pipeline] dir
00:01:47.001 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt
00:01:47.003 [Pipeline] {
00:01:47.016 [Pipeline] catchError
00:01:47.018 [Pipeline] {
00:01:47.031 [Pipeline] sh
00:01:47.311 + vagrant ssh-config --host vagrant
00:01:47.311 + sed -ne /^Host/,$p
00:01:47.311 + tee ssh_conf
00:01:49.848 Host vagrant
00:01:49.848 HostName 192.168.121.93
00:01:49.848 User vagrant
00:01:49.848 Port 22
00:01:49.848 UserKnownHostsFile /dev/null
00:01:49.848 StrictHostKeyChecking no
00:01:49.848 PasswordAuthentication no
00:01:49.848 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:01:49.848 IdentitiesOnly yes
00:01:49.848 LogLevel FATAL
00:01:49.848 ForwardAgent yes
00:01:49.848 ForwardX11 yes
00:01:49.848 
00:01:49.862 [Pipeline] withEnv
00:01:49.864 [Pipeline] {
00:01:49.880 [Pipeline] sh
00:01:50.161 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:50.161 source /etc/os-release
00:01:50.161 [[ -e /image.version ]] && img=$(< /image.version)
00:01:50.161 # Minimal, systemd-like check.
00:01:50.161 if [[ -e /.dockerenv ]]; then
00:01:50.161 # Clear garbage from the node's name:
00:01:50.161 # agt-er_autotest_547-896 -> autotest_547-896
00:01:50.161 # $HOSTNAME is the actual container id
00:01:50.161 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:50.161 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:50.161 # We can assume this is a mount from a host where container is running,
00:01:50.161 # so fetch its hostname to easily identify the target swarm worker.
00:01:50.161 container="$(< /etc/hostname) ($agent)"
00:01:50.161 else
00:01:50.161 # Fallback
00:01:50.161 container=$agent
00:01:50.161 fi
00:01:50.161 fi
00:01:50.161 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:50.161 
00:01:50.433 [Pipeline] }
00:01:50.453 [Pipeline] // withEnv
00:01:50.462 [Pipeline] setCustomBuildProperty
00:01:50.478 [Pipeline] stage
00:01:50.481 [Pipeline] { (Tests)
00:01:50.500 [Pipeline] sh
00:01:50.782 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:51.053 [Pipeline] sh
00:01:51.334 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:51.609 [Pipeline] timeout
00:01:51.610 Timeout set to expire in 40 min
00:01:51.612 [Pipeline] {
00:01:51.629 [Pipeline] sh
00:01:51.911 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:52.480 HEAD is now at 5fa2f5086 nvme: add lock_depth for ctrlr_lock
00:01:52.494 [Pipeline] sh
00:01:52.773 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:53.047 [Pipeline] sh
00:01:53.382 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:53.657 [Pipeline] sh
00:01:53.938 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:01:54.198 ++ readlink -f spdk_repo
00:01:54.198 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:54.198 + [[ -n /home/vagrant/spdk_repo ]]
00:01:54.198 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:54.198 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:54.198 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:54.198 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:54.198 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:54.198 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:54.198 + cd /home/vagrant/spdk_repo
00:01:54.198 + source /etc/os-release
00:01:54.198 ++ NAME='Fedora Linux'
00:01:54.198 ++ VERSION='38 (Cloud Edition)'
00:01:54.198 ++ ID=fedora
00:01:54.198 ++ VERSION_ID=38
00:01:54.198 ++ VERSION_CODENAME=
00:01:54.198 ++ PLATFORM_ID=platform:f38
00:01:54.198 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:54.198 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:54.198 ++ LOGO=fedora-logo-icon
00:01:54.198 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:54.198 ++ HOME_URL=https://fedoraproject.org/
00:01:54.198 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:54.198 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:54.198 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:54.198 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:54.198 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:54.198 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:54.198 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:54.198 ++ SUPPORT_END=2024-05-14
00:01:54.198 ++ VARIANT='Cloud Edition'
00:01:54.198 ++ VARIANT_ID=cloud
00:01:54.198 + uname -a
00:01:54.198 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:54.198 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:54.767 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:55.027 Hugepages
00:01:55.027 node hugesize free / total
00:01:55.027 node0 1048576kB 0 / 0
00:01:55.027 node0 2048kB 0 / 0
00:01:55.027 
00:01:55.027 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:55.027 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:55.027 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:55.027 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:55.027 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:01:55.027 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:01:55.027 + rm -f /tmp/spdk-ld-path
00:01:55.027 + source autorun-spdk.conf
00:01:55.027 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:55.027 ++ SPDK_TEST_NVME=1
00:01:55.027 ++ SPDK_TEST_FTL=1
00:01:55.027 ++ SPDK_TEST_ISAL=1
00:01:55.027 ++ SPDK_RUN_ASAN=1
00:01:55.027 ++ SPDK_RUN_UBSAN=1
00:01:55.027 ++ SPDK_TEST_XNVME=1
00:01:55.027 ++ SPDK_TEST_NVME_FDP=1
00:01:55.027 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:55.027 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:55.027 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:55.027 ++ RUN_NIGHTLY=1
00:01:55.027 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:55.027 + [[ -n '' ]]
00:01:55.028 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:55.028 + for M in /var/spdk/build-*-manifest.txt
00:01:55.028 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:55.028 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.287 + for M in /var/spdk/build-*-manifest.txt
00:01:55.287 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:55.287 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.287 ++ uname
00:01:55.287 + [[ Linux == \L\i\n\u\x ]]
00:01:55.287 + sudo dmesg -T
00:01:55.287 + sudo dmesg --clear
00:01:55.287 + dmesg_pid=5887
00:01:55.287 + sudo dmesg -Tw
00:01:55.287 + [[ Fedora Linux == FreeBSD ]]
00:01:55.287 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:55.287 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:55.287 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:55.287 + [[ -x /usr/src/fio-static/fio ]]
00:01:55.287 + export FIO_BIN=/usr/src/fio-static/fio
00:01:55.287 + FIO_BIN=/usr/src/fio-static/fio
00:01:55.287 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:55.287 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:55.287 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:55.287 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:55.287 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:55.287 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:55.287 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:55.287 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:55.287 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:55.287 Test configuration:
00:01:55.287 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:55.287 SPDK_TEST_NVME=1
00:01:55.287 SPDK_TEST_FTL=1
00:01:55.287 SPDK_TEST_ISAL=1
00:01:55.287 SPDK_RUN_ASAN=1
00:01:55.287 SPDK_RUN_UBSAN=1
00:01:55.287 SPDK_TEST_XNVME=1
00:01:55.287 SPDK_TEST_NVME_FDP=1
00:01:55.287 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:55.287 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:55.287 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:55.287 RUN_NIGHTLY=1 00:06:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:55.287 00:06:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:55.287 00:06:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:55.287 00:06:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:55.287 00:06:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.288 00:06:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.288 00:06:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.288 00:06:09 -- paths/export.sh@5 -- $ export PATH
00:01:55.288 00:06:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.288 00:06:09 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:55.288 00:06:09 -- common/autobuild_common.sh@437 -- $ date +%s
00:01:55.547 00:06:09 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721693169.XXXXXX
00:01:55.547 00:06:09 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721693169.8HqJxm
00:01:55.547 00:06:09 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:01:55.547 00:06:09 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']'
00:01:55.547 00:06:09 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:55.547 00:06:09 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:01:55.547 00:06:09 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:55.547 00:06:09 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:55.547 00:06:09 -- common/autobuild_common.sh@453 -- $ get_config_params
00:01:55.547 00:06:09 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:01:55.547 00:06:09 -- common/autotest_common.sh@10 -- $ set +x
00:01:55.547 00:06:10 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:01:55.547 00:06:10 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:01:55.547 00:06:10 -- pm/common@17 -- $ local monitor
00:01:55.547 00:06:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:55.547 00:06:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:55.547 00:06:10 -- pm/common@25 -- $ sleep 1
00:01:55.547 00:06:10 -- pm/common@21 -- $ date +%s
00:01:55.547 00:06:10 -- pm/common@21 -- $ date +%s
00:01:55.547 00:06:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721693170
00:01:55.547 00:06:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721693170
00:01:55.547 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721693170_collect-vmstat.pm.log
00:01:55.547 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721693170_collect-cpu-load.pm.log
00:01:56.484 00:06:11 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:01:56.484 00:06:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:56.484 00:06:11 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:56.484 00:06:11 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:56.484 00:06:11 -- spdk/autobuild.sh@16 -- $ date -u
00:01:56.484 Tue Jul 23 12:06:11 AM UTC 2024
00:01:56.484 00:06:11 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:56.484 v24.05-13-g5fa2f5086
00:01:56.484 00:06:11 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:56.484 00:06:11 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:56.484 00:06:11 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
00:01:56.484 00:06:11 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:01:56.484 00:06:11 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.484 ************************************
00:01:56.484 START TEST asan
00:01:56.484 ************************************
00:01:56.484 using asan
00:01:56.484 00:06:11 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan'
00:01:56.484 
00:01:56.484 real 0m0.001s
00:01:56.484 user 0m0.001s
00:01:56.484 sys 0m0.000s
00:01:56.484 00:06:11 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:01:56.484 ************************************
00:01:56.484 END TEST asan
00:01:56.484 ************************************
00:01:56.484 00:06:11 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:56.484 00:06:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:56.484 00:06:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:56.484 00:06:11 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
00:01:56.484 00:06:11 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:01:56.484 00:06:11 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.484 ************************************
00:01:56.484 START TEST ubsan
00:01:56.484 ************************************
00:01:56.484 using ubsan
00:01:56.484 00:06:11 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan'
00:01:56.484 
00:01:56.484 real 0m0.000s
00:01:56.484 user 0m0.000s
00:01:56.484 sys 0m0.000s
00:01:56.484 ************************************
00:01:56.484 END TEST ubsan
00:01:56.484 ************************************
00:01:56.484 00:06:11 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:01:56.484 00:06:11 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:56.743 00:06:11 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:01:56.743 00:06:11 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:56.743 00:06:11 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:56.743 00:06:11 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']'
00:01:56.743 00:06:11 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:01:56.743 00:06:11 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.743 ************************************
00:01:56.743 START TEST build_native_dpdk
00:01:56.743 ************************************
00:01:56.743 00:06:11 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:01:56.743 caf0f5d395 version: 22.11.4
00:01:56.743 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:56.743 dc9c799c7d vhost: fix missing spinlock unlock
00:01:56.743 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:56.743 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:56.743 00:06:11 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
00:01:56.743 00:06:11 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:01:56.743 00:06:11 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:01:56.743 00:06:11 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:01:56.743 00:06:11 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:01:56.743 00:06:11 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:56.744 00:06:11 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
00:01:56.744 00:06:11 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:56.744 patching file config/rte_config.h
00:01:56.744 Hunk #1 succeeded at 60 (offset 1 line).
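
The xtrace above steps through SPDK's lt and cmp_versions helpers from scripts/common.sh: both version strings are split into fields on ".", "-", and ":" via IFS, each field pair is normalized with decimal and compared numerically, and the first differing field decides the outcome. Here 22 > 21 in the first field, so cmp_versions 22.11.4 '<' 21.11.0 returns 1 and the lt guard is false. A minimal stand-alone sketch of that comparison logic, for illustration only (not the actual scripts/common.sh source):

    #!/bin/bash
    # Hypothetical re-implementation of the version comparison the trace shows.
    cmp_versions() {
        # Compare $1 and $3 under operator $2 (<, >, <=, >=, ==).
        local -a ver1 ver2
        local op=$2 v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            # Missing fields count as 0 so 22.11 and 22.11.0 compare equal.
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            # Non-numeric fields (e.g. "rc1") fall back to 0 in this sketch.
            [[ $d1 =~ ^[0-9]+$ ]] || d1=0
            [[ $d2 =~ ^[0-9]+$ ]] || d2=0
            if ((d1 > d2)); then
                [[ $op == '>' || $op == '>=' ]]
                return
            elif ((d1 < d2)); then
                [[ $op == '<' || $op == '<=' ]]
                return
            fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    # Matches the trace: 22 > 21 in the first field, so lt exits non-zero.
    lt 22.11.4 21.11.0 || echo "22.11.4 is not older than 21.11.0"

Run against the values in the trace, lt 22.11.4 21.11.0 exits with status 1, matching the "return 1" seen above, so the build proceeds on the modern-DPDK path.
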
00:01:56.744 00:06:11 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
00:01:56.744 00:06:11 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s
00:01:56.744 00:06:11 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
00:01:56.744 00:06:11 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:01:56.744 00:06:11 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:02.018 The Meson build system
00:02:02.018 Version: 1.3.1
00:02:02.018 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:02.018 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:02.018 Build type: native build
00:02:02.018 Program cat found: YES (/usr/bin/cat)
00:02:02.018 Project name: DPDK
00:02:02.018 Project version: 22.11.4
00:02:02.018 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:02.018 C linker for the host machine: gcc ld.bfd 2.39-16
00:02:02.018 Host machine cpu family: x86_64
00:02:02.018 Host machine cpu: x86_64
00:02:02.018 Message: ## Building in Developer Mode ##
00:02:02.018 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:02.018 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:02.018 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:02.018 Program objdump found: YES (/usr/bin/objdump)
00:02:02.018 Program python3 found: YES (/usr/bin/python3)
00:02:02.018 Program cat found: YES (/usr/bin/cat)
00:02:02.018 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:02.018 Checking for size of "void *" : 8
00:02:02.018 Checking for size of "void *" : 8 (cached)
00:02:02.018 Library m found: YES
00:02:02.018 Library numa found: YES
00:02:02.018 Has header "numaif.h" : YES
00:02:02.018 Library fdt found: NO
00:02:02.018 Library execinfo found: NO
00:02:02.018 Has header "execinfo.h" : YES
00:02:02.018 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:02.018 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:02.018 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:02.018 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:02.018 Run-time dependency openssl found: YES 3.0.9
00:02:02.018 Run-time dependency libpcap found: YES 1.10.4
00:02:02.018 Has header "pcap.h" with dependency libpcap: YES
00:02:02.018 Compiler for C supports arguments -Wcast-qual: YES
00:02:02.018 Compiler for C supports arguments -Wdeprecated: YES
00:02:02.018 Compiler for C supports arguments -Wformat: YES
00:02:02.018 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:02.018 Compiler for C supports arguments -Wformat-security: NO
00:02:02.018 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:02.018 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:02.018 Compiler for C supports arguments -Wnested-externs: YES
00:02:02.018 Compiler for C supports arguments -Wold-style-definition: YES
00:02:02.018 Compiler for C supports arguments -Wpointer-arith: YES
00:02:02.018 Compiler for C supports arguments -Wsign-compare: YES
00:02:02.018 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:02.018 Compiler for C supports arguments -Wundef: YES
00:02:02.018 Compiler for C supports arguments -Wwrite-strings: YES
00:02:02.018 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:02.018 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:02.018 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:02.018 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:02.018 Compiler for C supports arguments -mavx512f: YES
00:02:02.018 Checking if "AVX512 checking" compiles: YES
00:02:02.018 Fetching value of define "__SSE4_2__" : 1
00:02:02.018 Fetching value of define "__AES__" : 1
00:02:02.018 Fetching value of define "__AVX__" : 1
00:02:02.018 Fetching value of define "__AVX2__" : 1
00:02:02.018 Fetching value of define "__AVX512BW__" : 1
00:02:02.018 Fetching value of define "__AVX512CD__" : 1
00:02:02.018 Fetching value of define "__AVX512DQ__" : 1
00:02:02.018 Fetching value of define "__AVX512F__" : 1
00:02:02.018 Fetching value of define "__AVX512VL__" : 1
00:02:02.018 Fetching value of define "__PCLMUL__" : 1
00:02:02.018 Fetching value of define "__RDRND__" : 1
00:02:02.018 Fetching value of define "__RDSEED__" : 1
00:02:02.018 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:02.018 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:02.018 Message: lib/kvargs: Defining dependency "kvargs"
00:02:02.018 Message: lib/telemetry: Defining dependency "telemetry"
00:02:02.018 Checking for function "getentropy" : YES
00:02:02.018 Message: lib/eal: Defining dependency "eal"
00:02:02.018 Message: lib/ring: Defining dependency "ring"
00:02:02.018 Message: lib/rcu: Defining dependency "rcu"
00:02:02.018 Message: lib/mempool: Defining dependency "mempool"
00:02:02.018 Message: lib/mbuf: Defining dependency "mbuf"
00:02:02.018 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:02.018 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:02.018 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:02.018 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:02.018 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:02.018 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:02.018 Compiler for C supports arguments -mpclmul: YES
00:02:02.018 Compiler for C supports arguments -maes: YES
00:02:02.018 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:02.018 Compiler for C supports arguments -mavx512bw: YES
00:02:02.019 Compiler for C supports arguments -mavx512dq: YES
00:02:02.019 Compiler for C supports arguments -mavx512vl: YES
00:02:02.019 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:02.019 Compiler for C supports arguments -mavx2: YES
00:02:02.019 Compiler for C supports arguments -mavx: YES
00:02:02.019 Message: lib/net: Defining dependency "net"
00:02:02.019 Message: lib/meter: Defining dependency "meter"
00:02:02.019 Message: lib/ethdev: Defining dependency "ethdev"
00:02:02.019 Message: lib/pci: Defining dependency "pci"
00:02:02.019 Message: lib/cmdline: Defining dependency "cmdline"
00:02:02.019 Message: lib/metrics: Defining dependency "metrics"
00:02:02.019 Message: lib/hash: Defining dependency "hash"
00:02:02.019 Message: lib/timer: Defining dependency "timer"
00:02:02.019 Fetching value of define "__AVX2__" : 1 (cached)
00:02:02.019 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:02.019 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:02.019 Fetching value of define "__AVX512CD__" : 1 (cached)
00:02:02.019 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:02.019 Message: lib/acl: Defining dependency "acl"
00:02:02.019 Message: lib/bbdev: Defining dependency "bbdev"
00:02:02.019 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:02.019 Run-time dependency libelf found: YES 0.190
00:02:02.019 Message: lib/bpf: Defining dependency "bpf"
00:02:02.019 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:02.019 Message: lib/compressdev: Defining dependency "compressdev"
00:02:02.019 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:02.019 Message: lib/distributor: Defining dependency "distributor"
00:02:02.019 Message: lib/efd: Defining dependency "efd"
00:02:02.019 Message: lib/eventdev: Defining dependency "eventdev"
00:02:02.019 Message: lib/gpudev: Defining dependency "gpudev"
00:02:02.019 Message: lib/gro: Defining dependency "gro"
00:02:02.019 Message: lib/gso: Defining dependency "gso"
00:02:02.019 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:02.019 Message: lib/jobstats: Defining dependency "jobstats"
00:02:02.019 Message: lib/latencystats: Defining dependency "latencystats"
00:02:02.019 Message: lib/lpm: Defining dependency "lpm"
00:02:02.019 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:02.019 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:02.019 Fetching value of define "__AVX512IFMA__" : (undefined)
00:02:02.019 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:02:02.019 Message: lib/member: Defining dependency "member"
00:02:02.019 Message: lib/pcapng: Defining dependency "pcapng"
00:02:02.019 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:02.019 Message: lib/power: Defining dependency "power"
00:02:02.019 Message: lib/rawdev: Defining dependency "rawdev"
00:02:02.019 Message: lib/regexdev: Defining dependency "regexdev"
00:02:02.019 Message: lib/dmadev: Defining dependency "dmadev"
00:02:02.019 Message: lib/rib: Defining dependency "rib"
00:02:02.019 Message: lib/reorder: Defining dependency "reorder"
00:02:02.019 Message: lib/sched: Defining dependency "sched"
00:02:02.019 Message: lib/security: Defining dependency "security"
00:02:02.019 Message: lib/stack: Defining dependency "stack"
00:02:02.019 Has header "linux/userfaultfd.h" : YES
00:02:02.019 Message: lib/vhost: Defining dependency "vhost"
00:02:02.019 Message: lib/ipsec: Defining dependency "ipsec"
00:02:02.019 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:02.019 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:02.019 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:02.019 Message: lib/fib: Defining dependency "fib"
00:02:02.019 Message: lib/port: Defining dependency "port"
00:02:02.019 Message: lib/pdump: Defining dependency "pdump"
00:02:02.019 Message: lib/table: Defining dependency "table"
00:02:02.019 Message: lib/pipeline: Defining dependency "pipeline"
00:02:02.019 Message: lib/graph: Defining dependency "graph"
00:02:02.019 Message: lib/node: Defining dependency "node"
00:02:02.019 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:02.019 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:02.019 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:02.019 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:02.019 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:02.019 Compiler for C supports arguments -Wno-unused-value: YES
00:02:02.019 Compiler for C supports arguments -Wno-format: YES
00:02:02.019 Compiler for C supports arguments -Wno-format-security: YES
00:02:02.019 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:02.019 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:03.397 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:03.397 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:03.397 Fetching value of define "__AVX2__" : 1 (cached)
00:02:03.397 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:03.397 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:03.397 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:03.397 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:03.397 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:03.397 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:03.397 Program doxygen found: YES (/usr/bin/doxygen)
00:02:03.397 Configuring doxy-api.conf using configuration
00:02:03.397 Program sphinx-build found: NO
00:02:03.397 Configuring rte_build_config.h using configuration
00:02:03.397 Message:
00:02:03.397 =================
00:02:03.397 Applications Enabled
00:02:03.397 =================
00:02:03.397 
00:02:03.397 apps:
00:02:03.397 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:02:03.397 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:02:03.397 test-security-perf,
00:02:03.397 
00:02:03.397 Message:
00:02:03.397 =================
00:02:03.397 Libraries Enabled
00:02:03.397 =================
00:02:03.397 
00:02:03.397 libs:
00:02:03.397 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:02:03.397 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:02:03.397 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:02:03.397 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:02:03.397 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:02:03.397 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:02:03.397 table, pipeline, graph, node,
00:02:03.397 
00:02:03.397 Message:
00:02:03.397 ===============
00:02:03.397 Drivers Enabled
00:02:03.397 ===============
00:02:03.397 
00:02:03.397 common:
00:02:03.397 
00:02:03.397 bus:
00:02:03.397 pci, vdev,
00:02:03.397 mempool:
00:02:03.397 ring,
00:02:03.397 dma:
00:02:03.397 
00:02:03.397 net:
00:02:03.397 i40e,
00:02:03.397 raw:
00:02:03.397 
00:02:03.397 crypto:
00:02:03.397 
00:02:03.397 compress:
00:02:03.397 
00:02:03.397 regex:
00:02:03.397 
00:02:03.397 vdpa:
00:02:03.397 
00:02:03.397 event:
00:02:03.397 
00:02:03.397 baseband:
00:02:03.397 
00:02:03.397 gpu:
00:02:03.397 
00:02:03.397 
00:02:03.397 Message:
00:02:03.397 =================
00:02:03.397 Content Skipped
00:02:03.397 =================
00:02:03.397 
00:02:03.397 apps:
00:02:03.397 
00:02:03.397 libs:
00:02:03.397 kni: explicitly disabled via build config (deprecated lib)
00:02:03.397 flow_classify: explicitly disabled via build config (deprecated lib)
00:02:03.397 
00:02:03.397 drivers:
00:02:03.397 common/cpt: not in enabled drivers build config
00:02:03.397 common/dpaax: not in enabled drivers build config
00:02:03.397 common/iavf: not in enabled drivers build config
00:02:03.397 common/idpf: not in enabled drivers build config
00:02:03.397 common/mvep: not in enabled drivers build config
00:02:03.397 common/octeontx: not in enabled drivers build config
00:02:03.397 bus/auxiliary: not in enabled drivers build config
00:02:03.397 bus/dpaa: not in enabled drivers build config
00:02:03.397 bus/fslmc: not in enabled drivers build config
00:02:03.397 bus/ifpga: not in enabled drivers build config
00:02:03.397 bus/vmbus: not in enabled drivers build config
00:02:03.397 common/cnxk: not in enabled drivers build config
00:02:03.397 common/mlx5: not in enabled drivers build config
00:02:03.397 common/qat: not in enabled drivers build config
00:02:03.397 common/sfc_efx: not in enabled drivers build config
00:02:03.397 mempool/bucket: not in enabled drivers build config
00:02:03.397 mempool/cnxk: not in enabled drivers build config
00:02:03.397 mempool/dpaa: not in enabled drivers build config
00:02:03.397 mempool/dpaa2: not in enabled drivers build config
00:02:03.397 mempool/octeontx: not in enabled drivers build config
00:02:03.397 mempool/stack: not in enabled drivers build config
00:02:03.397 dma/cnxk: not in enabled drivers build config
00:02:03.397 dma/dpaa: not in enabled drivers build config
00:02:03.397 dma/dpaa2: not in enabled drivers build config
00:02:03.397 dma/hisilicon: not in enabled drivers build config
00:02:03.397 dma/idxd: not in enabled drivers build config
00:02:03.397 dma/ioat: not in enabled drivers build config
00:02:03.397 dma/skeleton: not in enabled drivers build config
00:02:03.397 net/af_packet: not in enabled drivers build config
00:02:03.397 net/af_xdp: not in enabled drivers build config
00:02:03.397 net/ark: not in enabled drivers build config
00:02:03.397 net/atlantic: not in enabled drivers build config
00:02:03.397 net/avp: not in enabled drivers build config
00:02:03.397 net/axgbe: not in enabled drivers build config
00:02:03.397 net/bnx2x: not in enabled drivers build config
00:02:03.397 net/bnxt: not in enabled drivers build config
00:02:03.397 net/bonding: not in enabled drivers build config
00:02:03.397 net/cnxk: not in enabled drivers build config
00:02:03.397 net/cxgbe: not in enabled drivers build config 00:02:03.397 net/dpaa: not in enabled drivers build config 00:02:03.397 net/dpaa2: not in enabled drivers build config 00:02:03.397 net/e1000: not in enabled drivers build config 00:02:03.397 net/ena: not in enabled drivers build config 00:02:03.397 net/enetc: not in enabled drivers build config 00:02:03.397 net/enetfec: not in enabled drivers build config 00:02:03.397 net/enic: not in enabled drivers build config 00:02:03.397 net/failsafe: not in enabled drivers build config 00:02:03.397 net/fm10k: not in enabled drivers build config 00:02:03.397 net/gve: not in enabled drivers build config 00:02:03.397 net/hinic: not in enabled drivers build config 00:02:03.397 net/hns3: not in enabled drivers build config 00:02:03.397 net/iavf: not in enabled drivers build config 00:02:03.397 net/ice: not in enabled drivers build config 00:02:03.397 net/idpf: not in enabled drivers build config 00:02:03.397 net/igc: not in enabled drivers build config 00:02:03.397 net/ionic: not in enabled drivers build config 00:02:03.397 net/ipn3ke: not in enabled drivers build config 00:02:03.397 net/ixgbe: not in enabled drivers build config 00:02:03.397 net/kni: not in enabled drivers build config 00:02:03.397 net/liquidio: not in enabled drivers build config 00:02:03.397 net/mana: not in enabled drivers build config 00:02:03.397 net/memif: not in enabled drivers build config 00:02:03.397 net/mlx4: not in enabled drivers build config 00:02:03.397 net/mlx5: not in enabled drivers build config 00:02:03.397 net/mvneta: not in enabled drivers build config 00:02:03.397 net/mvpp2: not in enabled drivers build config 00:02:03.397 net/netvsc: not in enabled drivers build config 00:02:03.397 net/nfb: not in enabled drivers build config 00:02:03.397 net/nfp: not in enabled drivers build config 00:02:03.397 net/ngbe: not in enabled drivers build config 00:02:03.397 net/null: not in enabled drivers build config 00:02:03.397 net/octeontx: not in enabled drivers build config 00:02:03.397 net/octeon_ep: not in enabled drivers build config 00:02:03.398 net/pcap: not in enabled drivers build config 00:02:03.398 net/pfe: not in enabled drivers build config 00:02:03.398 net/qede: not in enabled drivers build config 00:02:03.398 net/ring: not in enabled drivers build config 00:02:03.398 net/sfc: not in enabled drivers build config 00:02:03.398 net/softnic: not in enabled drivers build config 00:02:03.398 net/tap: not in enabled drivers build config 00:02:03.398 net/thunderx: not in enabled drivers build config 00:02:03.398 net/txgbe: not in enabled drivers build config 00:02:03.398 net/vdev_netvsc: not in enabled drivers build config 00:02:03.398 net/vhost: not in enabled drivers build config 00:02:03.398 net/virtio: not in enabled drivers build config 00:02:03.398 net/vmxnet3: not in enabled drivers build config 00:02:03.398 raw/cnxk_bphy: not in enabled drivers build config 00:02:03.398 raw/cnxk_gpio: not in enabled drivers build config 00:02:03.398 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:03.398 raw/ifpga: not in enabled drivers build config 00:02:03.398 raw/ntb: not in enabled drivers build config 00:02:03.398 raw/skeleton: not in enabled drivers build config 00:02:03.398 crypto/armv8: not in enabled drivers build config 00:02:03.398 crypto/bcmfs: not in enabled drivers build config 00:02:03.398 crypto/caam_jr: not in enabled drivers build config 00:02:03.398 crypto/ccp: not in enabled drivers build config 00:02:03.398 crypto/cnxk: not in enabled drivers 
build config 00:02:03.398 crypto/dpaa_sec: not in enabled drivers build config 00:02:03.398 crypto/dpaa2_sec: not in enabled drivers build config 00:02:03.398 crypto/ipsec_mb: not in enabled drivers build config 00:02:03.398 crypto/mlx5: not in enabled drivers build config 00:02:03.398 crypto/mvsam: not in enabled drivers build config 00:02:03.398 crypto/nitrox: not in enabled drivers build config 00:02:03.398 crypto/null: not in enabled drivers build config 00:02:03.398 crypto/octeontx: not in enabled drivers build config 00:02:03.398 crypto/openssl: not in enabled drivers build config 00:02:03.398 crypto/scheduler: not in enabled drivers build config 00:02:03.398 crypto/uadk: not in enabled drivers build config 00:02:03.398 crypto/virtio: not in enabled drivers build config 00:02:03.398 compress/isal: not in enabled drivers build config 00:02:03.398 compress/mlx5: not in enabled drivers build config 00:02:03.398 compress/octeontx: not in enabled drivers build config 00:02:03.398 compress/zlib: not in enabled drivers build config 00:02:03.398 regex/mlx5: not in enabled drivers build config 00:02:03.398 regex/cn9k: not in enabled drivers build config 00:02:03.398 vdpa/ifc: not in enabled drivers build config 00:02:03.398 vdpa/mlx5: not in enabled drivers build config 00:02:03.398 vdpa/sfc: not in enabled drivers build config 00:02:03.398 event/cnxk: not in enabled drivers build config 00:02:03.398 event/dlb2: not in enabled drivers build config 00:02:03.398 event/dpaa: not in enabled drivers build config 00:02:03.398 event/dpaa2: not in enabled drivers build config 00:02:03.398 event/dsw: not in enabled drivers build config 00:02:03.398 event/opdl: not in enabled drivers build config 00:02:03.398 event/skeleton: not in enabled drivers build config 00:02:03.398 event/sw: not in enabled drivers build config 00:02:03.398 event/octeontx: not in enabled drivers build config 00:02:03.398 baseband/acc: not in enabled drivers build config 00:02:03.398 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:03.398 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:03.398 baseband/la12xx: not in enabled drivers build config 00:02:03.398 baseband/null: not in enabled drivers build config 00:02:03.398 baseband/turbo_sw: not in enabled drivers build config 00:02:03.398 gpu/cuda: not in enabled drivers build config 00:02:03.398 00:02:03.398 00:02:03.398 Build targets in project: 311 00:02:03.398 00:02:03.398 DPDK 22.11.4 00:02:03.398 00:02:03.398 User defined options 00:02:03.398 libdir : lib 00:02:03.398 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:03.398 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:03.398 c_link_args : 00:02:03.398 enable_docs : false 00:02:03.398 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:03.398 enable_kmods : false 00:02:03.398 machine : native 00:02:03.398 tests : false 00:02:03.398 00:02:03.398 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:03.398 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
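
[Note: the "User defined options" summary above carries everything needed to reproduce this configure step by hand. A minimal sketch, assuming meson's standard --prefix/--libdir/-D option syntax and the build-tmp directory named in the ninja step that follows; the harness's exact command line is not printed in this log, and meson's own warning above indicates it was issued in the deprecated bare `meson [options]` form rather than `meson setup`:

    # hypothetical reconstruction of the configure + build steps, run from the dpdk source tree
    meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false
    ninja -C build-tmp -j10    # matches the build invocation logged next
]
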
00:02:03.398 00:06:17 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:03.398 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:03.398 [1/740] Generating lib/rte_telemetry_def with a custom command 00:02:03.398 [2/740] Generating lib/rte_kvargs_def with a custom command 00:02:03.398 [3/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:03.398 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:03.398 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:03.398 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:03.398 [7/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:03.398 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:03.398 [9/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:03.398 [10/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:03.398 [11/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:03.398 [12/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:03.398 [13/740] Linking static target lib/librte_kvargs.a 00:02:03.398 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:03.657 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:03.657 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:03.657 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:03.657 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:03.657 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:03.657 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:03.657 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:03.657 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:03.657 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:03.657 [24/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.657 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:03.657 [26/740] Linking target lib/librte_kvargs.so.23.0 00:02:03.657 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:03.915 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:03.915 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:03.915 [30/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:03.915 [31/740] Linking static target lib/librte_telemetry.a 00:02:03.915 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:03.915 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:03.915 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:03.915 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:03.915 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:03.915 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:03.915 [38/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:03.915 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:03.915 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:04.174 [41/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:04.174 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:04.174 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:04.174 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:04.174 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:04.174 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:04.174 [47/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.174 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:04.174 [49/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.174 [50/740] Linking target lib/librte_telemetry.so.23.0 00:02:04.174 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:04.174 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:04.174 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:04.432 [54/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:04.432 [55/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:04.432 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:04.432 [57/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:04.432 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:04.432 [59/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:04.432 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:04.432 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:04.432 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:04.432 [63/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:04.432 [64/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:04.432 [65/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:04.432 [66/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:04.432 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:04.432 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:04.432 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:04.432 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:04.432 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:04.691 [72/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:04.691 [73/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:04.691 [74/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:04.691 [75/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:04.691 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:04.691 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:04.691 [78/740] Compiling C 
object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:04.691 [79/740] Generating lib/rte_eal_def with a custom command 00:02:04.691 [80/740] Generating lib/rte_eal_mingw with a custom command 00:02:04.691 [81/740] Generating lib/rte_ring_def with a custom command 00:02:04.691 [82/740] Generating lib/rte_ring_mingw with a custom command 00:02:04.691 [83/740] Generating lib/rte_rcu_def with a custom command 00:02:04.691 [84/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:04.691 [85/740] Generating lib/rte_rcu_mingw with a custom command 00:02:04.691 [86/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:04.691 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:04.691 [88/740] Linking static target lib/librte_ring.a 00:02:04.691 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:04.691 [90/740] Generating lib/rte_mempool_def with a custom command 00:02:04.950 [91/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:04.950 [92/740] Generating lib/rte_mempool_mingw with a custom command 00:02:04.950 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:04.950 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.950 [95/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:04.950 [96/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:04.950 [97/740] Generating lib/rte_mbuf_def with a custom command 00:02:04.950 [98/740] Linking static target lib/librte_eal.a 00:02:04.950 [99/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:05.209 [100/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:05.209 [101/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:05.209 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:05.209 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:05.209 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:05.209 [105/740] Linking static target lib/librte_rcu.a 00:02:05.468 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:05.468 [107/740] Linking static target lib/librte_mempool.a 00:02:05.468 [108/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:05.468 [109/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:05.468 [110/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:05.468 [111/740] Generating lib/rte_net_def with a custom command 00:02:05.468 [112/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:05.468 [113/740] Generating lib/rte_net_mingw with a custom command 00:02:05.468 [114/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:05.468 [115/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.468 [116/740] Generating lib/rte_meter_mingw with a custom command 00:02:05.468 [117/740] Generating lib/rte_meter_def with a custom command 00:02:05.727 [118/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:05.727 [119/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:05.727 [120/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:05.727 [121/740] Linking static target lib/librte_meter.a 00:02:05.727 [122/740] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:05.727 [123/740] Linking static target lib/librte_net.a 00:02:05.985 [124/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:05.985 [125/740] Linking static target lib/librte_mbuf.a 00:02:05.985 [126/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.985 [127/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:05.985 [128/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:05.985 [129/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.985 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:05.985 [131/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.985 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:05.985 [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:06.243 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:06.244 [135/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.502 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:06.502 [137/740] Generating lib/rte_ethdev_def with a custom command 00:02:06.502 [138/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:06.502 [139/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:06.503 [140/740] Generating lib/rte_pci_def with a custom command 00:02:06.503 [141/740] Generating lib/rte_pci_mingw with a custom command 00:02:06.503 [142/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:06.503 [143/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:06.503 [144/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:06.503 [145/740] Linking static target lib/librte_pci.a 00:02:06.763 [146/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:06.763 [147/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:06.763 [148/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:06.763 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:06.763 [150/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.763 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:06.763 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:06.763 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:06.763 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:06.763 [155/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:07.026 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:07.026 [157/740] Generating lib/rte_cmdline_def with a custom command 00:02:07.026 [158/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:07.026 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:07.026 [160/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:07.026 [161/740] Generating lib/rte_metrics_def with a custom command 00:02:07.026 [162/740] Generating lib/rte_metrics_mingw with a custom command 00:02:07.026 [163/740] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:07.026 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:07.026 [165/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:07.026 [166/740] Generating lib/rte_hash_def with a custom command 00:02:07.026 [167/740] Linking static target lib/librte_cmdline.a 00:02:07.026 [168/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:07.026 [169/740] Generating lib/rte_hash_mingw with a custom command 00:02:07.026 [170/740] Generating lib/rte_timer_def with a custom command 00:02:07.026 [171/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:07.026 [172/740] Generating lib/rte_timer_mingw with a custom command 00:02:07.285 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:07.285 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:07.285 [175/740] Linking static target lib/librte_metrics.a 00:02:07.285 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:07.285 [177/740] Linking static target lib/librte_timer.a 00:02:07.544 [178/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:07.544 [179/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.803 [180/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:07.803 [181/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:07.803 [182/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.803 [183/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.803 [184/740] Generating lib/rte_acl_def with a custom command 00:02:08.061 [185/740] Generating lib/rte_acl_mingw with a custom command 00:02:08.061 [186/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:08.061 [187/740] Linking static target lib/librte_ethdev.a 00:02:08.061 [188/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:08.061 [189/740] Generating lib/rte_bbdev_def with a custom command 00:02:08.061 [190/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:08.061 [191/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:08.061 [192/740] Generating lib/rte_bitratestats_def with a custom command 00:02:08.061 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:08.320 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:08.320 [195/740] Linking static target lib/librte_bitratestats.a 00:02:08.320 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:08.579 [197/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:08.579 [198/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:08.579 [199/740] Linking static target lib/librte_bbdev.a 00:02:08.579 [200/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.837 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:09.095 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:09.095 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:09.095 [204/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:09.095 [205/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 
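
[Note: a pattern worth reading out of the stream above: each library is compiled once, linked both as a static archive (librte_*.a) and as a versioned shared object (librte_*.so.23.0), and then followed by a generated lib/*.sym_chk step; those checks appear to compare the symbols the shared object actually exports against the library's version map, which is why meson wraps them to capture their output. A hypothetical way to spot-check the same exports by hand; the lib/ path is taken from the "Linking target" lines above, the build directory from the ninja invocation:

    # list the dynamic symbols a finished DPDK shared object defines
    nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_kvargs.so.23.0
]
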
00:02:09.354 [206/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:09.354 [207/740] Linking static target lib/librte_hash.a 00:02:09.354 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:09.613 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:09.613 [210/740] Generating lib/rte_bpf_def with a custom command 00:02:09.613 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:02:09.613 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:09.613 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:02:09.613 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:09.613 [215/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:09.872 [216/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:09.872 [217/740] Linking static target lib/librte_cfgfile.a 00:02:09.872 [218/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:09.872 [219/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.872 [220/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:09.872 [221/740] Generating lib/rte_compressdev_def with a custom command 00:02:09.872 [222/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:10.131 [223/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:10.131 [224/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:10.131 [225/740] Linking static target lib/librte_bpf.a 00:02:10.131 [226/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:10.131 [227/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.131 [228/740] Linking static target lib/librte_acl.a 00:02:10.131 [229/740] Generating lib/rte_cryptodev_def with a custom command 00:02:10.131 [230/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:10.390 [231/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:10.390 [232/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:10.390 [233/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.390 [234/740] Generating lib/rte_distributor_def with a custom command 00:02:10.390 [235/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.390 [236/740] Generating lib/rte_distributor_mingw with a custom command 00:02:10.390 [237/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:10.390 [238/740] Linking static target lib/librte_compressdev.a 00:02:10.390 [239/740] Generating lib/rte_efd_def with a custom command 00:02:10.390 [240/740] Generating lib/rte_efd_mingw with a custom command 00:02:10.390 [241/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:10.650 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:10.650 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:10.909 [244/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:10.909 [245/740] Linking static target lib/librte_distributor.a 00:02:10.909 [246/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:11.168 [247/740] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:11.168 [248/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.168 [249/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.426 [250/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:11.426 [251/740] Generating lib/rte_eventdev_def with a custom command 00:02:11.426 [252/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:11.426 [253/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:11.426 [254/740] Linking static target lib/librte_efd.a 00:02:11.684 [255/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:11.684 [256/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.684 [257/740] Generating lib/rte_gpudev_def with a custom command 00:02:11.684 [258/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:11.942 [259/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:11.942 [260/740] Linking static target lib/librte_cryptodev.a 00:02:11.942 [261/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:11.942 [262/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:11.942 [263/740] Linking static target lib/librte_gpudev.a 00:02:11.942 [264/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:12.201 [265/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:12.201 [266/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:12.201 [267/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.201 [268/740] Generating lib/rte_gro_def with a custom command 00:02:12.201 [269/740] Generating lib/rte_gro_mingw with a custom command 00:02:12.459 [270/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:12.459 [271/740] Linking target lib/librte_eal.so.23.0 00:02:12.459 [272/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:12.459 [273/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:12.459 [274/740] Linking target lib/librte_ring.so.23.0 00:02:12.719 [275/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:12.719 [276/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:12.719 [277/740] Linking target lib/librte_meter.so.23.0 00:02:12.719 [278/740] Linking target lib/librte_rcu.so.23.0 00:02:12.719 [279/740] Linking target lib/librte_mempool.so.23.0 00:02:12.719 [280/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:12.719 [281/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:12.719 [282/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:12.719 [283/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.719 [284/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:12.719 [285/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:12.719 [286/740] Linking target lib/librte_pci.so.23.0 00:02:12.719 [287/740] Linking target lib/librte_timer.so.23.0 00:02:12.719 [288/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:12.719 [289/740] Linking 
static target lib/librte_gro.a 00:02:12.719 [290/740] Linking target lib/librte_cfgfile.so.23.0 00:02:12.719 [291/740] Linking target lib/librte_acl.so.23.0 00:02:12.978 [292/740] Linking target lib/librte_mbuf.so.23.0 00:02:12.978 [293/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.978 [294/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:12.978 [295/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:12.978 [296/740] Linking static target lib/librte_eventdev.a 00:02:12.978 [297/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:12.978 [298/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:12.978 [299/740] Generating lib/rte_gso_mingw with a custom command 00:02:12.978 [300/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:12.978 [301/740] Generating lib/rte_gso_def with a custom command 00:02:12.978 [302/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:12.978 [303/740] Linking target lib/librte_net.so.23.0 00:02:12.978 [304/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.978 [305/740] Linking target lib/librte_bbdev.so.23.0 00:02:13.237 [306/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:13.237 [307/740] Linking target lib/librte_compressdev.so.23.0 00:02:13.237 [308/740] Linking target lib/librte_cmdline.so.23.0 00:02:13.237 [309/740] Linking target lib/librte_ethdev.so.23.0 00:02:13.237 [310/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:13.237 [311/740] Linking target lib/librte_hash.so.23.0 00:02:13.237 [312/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:13.237 [313/740] Linking target lib/librte_distributor.so.23.0 00:02:13.237 [314/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:13.237 [315/740] Linking static target lib/librte_gso.a 00:02:13.237 [316/740] Linking target lib/librte_gpudev.so.23.0 00:02:13.237 [317/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:13.237 [318/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:13.237 [319/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:13.237 [320/740] Linking target lib/librte_metrics.so.23.0 00:02:13.495 [321/740] Linking target lib/librte_bpf.so.23.0 00:02:13.495 [322/740] Linking target lib/librte_efd.so.23.0 00:02:13.495 [323/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.495 [324/740] Linking target lib/librte_gro.so.23.0 00:02:13.495 [325/740] Linking target lib/librte_gso.so.23.0 00:02:13.495 [326/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:13.495 [327/740] Generating lib/rte_ip_frag_def with a custom command 00:02:13.495 [328/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:13.495 [329/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:13.495 [330/740] Linking target lib/librte_bitratestats.so.23.0 00:02:13.495 [331/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:13.496 [332/740] Generating lib/rte_jobstats_def with a custom command 00:02:13.496 [333/740] Generating lib/rte_jobstats_mingw with a custom 
command 00:02:13.496 [334/740] Generating lib/rte_latencystats_def with a custom command 00:02:13.496 [335/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:13.496 [336/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:13.754 [337/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:13.754 [338/740] Generating lib/rte_lpm_def with a custom command 00:02:13.754 [339/740] Linking static target lib/librte_jobstats.a 00:02:13.754 [340/740] Generating lib/rte_lpm_mingw with a custom command 00:02:13.754 [341/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:13.754 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:14.013 [343/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.013 [344/740] Linking target lib/librte_jobstats.so.23.0 00:02:14.013 [345/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:14.013 [346/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:14.013 [347/740] Linking static target lib/librte_latencystats.a 00:02:14.013 [348/740] Linking static target lib/librte_ip_frag.a 00:02:14.014 [349/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.014 [350/740] Linking target lib/librte_cryptodev.so.23.0 00:02:14.014 [351/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:14.014 [352/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:14.014 [353/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:14.014 [354/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:14.273 [355/740] Generating lib/rte_member_def with a custom command 00:02:14.273 [356/740] Generating lib/rte_member_mingw with a custom command 00:02:14.273 [357/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.273 [358/740] Generating lib/rte_pcapng_def with a custom command 00:02:14.273 [359/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:14.273 [360/740] Linking target lib/librte_latencystats.so.23.0 00:02:14.273 [361/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:14.273 [362/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.273 [363/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:14.273 [364/740] Linking target lib/librte_ip_frag.so.23.0 00:02:14.273 [365/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:14.273 [366/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:14.273 [367/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:14.532 [368/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:14.532 [369/740] Linking static target lib/librte_lpm.a 00:02:14.532 [370/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:14.532 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:14.532 [372/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:14.792 [373/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:14.792 [374/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 
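
[Note: the AVX-512 probes from the configure stage at the top of this section (__AVX512F__, __AVX512BW__, -march=skylake-avx512) show up here in the build: objects such as rte_member_sketch_avx512.c.o and the earlier net_crc_avx512.c.o go into separate helper archives (libsketch_avx512_tmp.a, libnet_crc_avx512_lib.a) so the wider instructions stay out of the baseline code and can be chosen at run time based on CPU flags. A quick, hypothetical check that the host actually advertises those features:

    # print each distinct avx512 feature flag the CPU reports
    grep -o 'avx512[a-z]*' /proc/cpuinfo | sort -u
]
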
00:02:14.792 [375/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.792 [376/740] Generating lib/rte_power_def with a custom command 00:02:14.792 [377/740] Generating lib/rte_power_mingw with a custom command 00:02:14.792 [378/740] Generating lib/rte_rawdev_def with a custom command 00:02:14.792 [379/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.792 [380/740] Linking target lib/librte_eventdev.so.23.0 00:02:14.792 [381/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:14.792 [382/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:14.792 [383/740] Generating lib/rte_regexdev_def with a custom command 00:02:14.792 [384/740] Linking target lib/librte_lpm.so.23.0 00:02:14.792 [385/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:14.792 [386/740] Linking static target lib/librte_pcapng.a 00:02:14.792 [387/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:14.792 [388/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:14.792 [389/740] Generating lib/rte_dmadev_def with a custom command 00:02:14.792 [390/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:14.792 [391/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:15.051 [392/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:15.051 [393/740] Generating lib/rte_rib_def with a custom command 00:02:15.051 [394/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:15.051 [395/740] Linking static target lib/librte_rawdev.a 00:02:15.051 [396/740] Generating lib/rte_rib_mingw with a custom command 00:02:15.051 [397/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:15.051 [398/740] Generating lib/rte_reorder_def with a custom command 00:02:15.051 [399/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.051 [400/740] Generating lib/rte_reorder_mingw with a custom command 00:02:15.051 [401/740] Linking target lib/librte_pcapng.so.23.0 00:02:15.051 [402/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:15.051 [403/740] Linking static target lib/librte_dmadev.a 00:02:15.051 [404/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:15.051 [405/740] Linking static target lib/librte_power.a 00:02:15.311 [406/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:15.311 [407/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:15.311 [408/740] Linking static target lib/librte_regexdev.a 00:02:15.311 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:15.311 [410/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:15.311 [411/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:15.311 [412/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.311 [413/740] Generating lib/rte_sched_def with a custom command 00:02:15.613 [414/740] Generating lib/rte_sched_mingw with a custom command 00:02:15.613 [415/740] Linking target lib/librte_rawdev.so.23.0 00:02:15.613 [416/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:15.613 [417/740] Generating lib/rte_security_def with a custom command 00:02:15.613 [418/740] 
Generating lib/rte_security_mingw with a custom command 00:02:15.613 [419/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:15.613 [420/740] Linking static target lib/librte_reorder.a 00:02:15.613 [421/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:15.613 [422/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:15.613 [423/740] Generating lib/rte_stack_def with a custom command 00:02:15.613 [424/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.613 [425/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:15.613 [426/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:15.613 [427/740] Generating lib/rte_stack_mingw with a custom command 00:02:15.613 [428/740] Linking static target lib/librte_member.a 00:02:15.613 [429/740] Linking static target lib/librte_stack.a 00:02:15.613 [430/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:15.613 [431/740] Linking static target lib/librte_rib.a 00:02:15.613 [432/740] Linking target lib/librte_dmadev.so.23.0 00:02:15.872 [433/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.872 [434/740] Linking target lib/librte_reorder.so.23.0 00:02:15.872 [435/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:15.872 [436/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:15.872 [437/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.872 [438/740] Linking target lib/librte_stack.so.23.0 00:02:15.872 [439/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.872 [440/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.872 [441/740] Linking target lib/librte_regexdev.so.23.0 00:02:16.131 [442/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:16.131 [443/740] Linking target lib/librte_member.so.23.0 00:02:16.131 [444/740] Linking static target lib/librte_security.a 00:02:16.131 [445/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.131 [446/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.131 [447/740] Linking target lib/librte_rib.so.23.0 00:02:16.131 [448/740] Linking target lib/librte_power.so.23.0 00:02:16.131 [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:16.131 [450/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:16.131 [451/740] Generating lib/rte_vhost_mingw with a custom command 00:02:16.131 [452/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:16.391 [453/740] Generating lib/rte_vhost_def with a custom command 00:02:16.391 [454/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:16.391 [455/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.391 [456/740] Linking target lib/librte_security.so.23.0 00:02:16.391 [457/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:16.391 [458/740] Linking static target lib/librte_sched.a 00:02:16.651 [459/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:16.651 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 
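
[Note: if any step in this stream failed, the object path shown in its [N/740] line is itself a valid ninja target, so a single compile can be replayed in isolation. A hypothetical replay of the step just above, with -v so ninja echoes the full compiler command line; touch the corresponding source file first so ninja considers the object out of date:

    # re-run one compile step verbosely to inspect the exact compiler invocation
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -v lib/librte_ipsec.a.p/ipsec_ses.c.o
]
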
00:02:16.910 [461/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:16.911 [462/740] Generating lib/rte_ipsec_def with a custom command 00:02:16.911 [463/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:16.911 [464/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.911 [465/740] Linking target lib/librte_sched.so.23.0 00:02:16.911 [466/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:16.911 [467/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:16.911 [468/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:17.170 [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:17.170 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:17.170 [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:17.170 [472/740] Generating lib/rte_fib_def with a custom command 00:02:17.170 [473/740] Generating lib/rte_fib_mingw with a custom command 00:02:17.170 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:17.430 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:17.688 [476/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:17.688 [477/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:17.688 [478/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:17.688 [479/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:17.688 [480/740] Linking static target lib/librte_ipsec.a 00:02:17.947 [481/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:17.947 [482/740] Linking static target lib/librte_fib.a 00:02:17.947 [483/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:17.947 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:17.947 [485/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.947 [486/740] Linking target lib/librte_ipsec.so.23.0 00:02:17.947 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:18.207 [488/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:18.207 [489/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.207 [490/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:18.207 [491/740] Linking target lib/librte_fib.so.23.0 00:02:18.781 [492/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:18.781 [493/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:18.781 [494/740] Generating lib/rte_port_def with a custom command 00:02:18.781 [495/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:18.781 [496/740] Generating lib/rte_port_mingw with a custom command 00:02:18.781 [497/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:18.781 [498/740] Generating lib/rte_pdump_def with a custom command 00:02:18.781 [499/740] Generating lib/rte_pdump_mingw with a custom command 00:02:18.781 [500/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:18.781 [501/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:18.781 [502/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:19.044 [503/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:19.044 
[504/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:19.044 [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:19.044 [506/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:19.302 [507/740] Linking static target lib/librte_port.a 00:02:19.302 [508/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:19.302 [509/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:19.302 [510/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:19.302 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:19.302 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:19.560 [513/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:19.560 [514/740] Linking static target lib/librte_pdump.a 00:02:19.560 [515/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.819 [516/740] Linking target lib/librte_port.so.23.0 00:02:19.819 [517/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:19.819 [518/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:19.819 [519/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.819 [520/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:19.819 [521/740] Linking target lib/librte_pdump.so.23.0 00:02:19.819 [522/740] Generating lib/rte_table_def with a custom command 00:02:19.819 [523/740] Generating lib/rte_table_mingw with a custom command 00:02:20.077 [524/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:20.077 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:20.078 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:20.078 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:20.336 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:20.336 [529/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:20.336 [530/740] Linking static target lib/librte_table.a 00:02:20.336 [531/740] Generating lib/rte_pipeline_def with a custom command 00:02:20.336 [532/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:20.336 [533/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:20.594 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:20.594 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:20.852 [536/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:20.852 [537/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.853 [538/740] Linking target lib/librte_table.so.23.0 00:02:20.853 [539/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:21.111 [540/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:21.111 [541/740] Generating lib/rte_graph_def with a custom command 00:02:21.111 [542/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:21.111 [543/740] Generating lib/rte_graph_mingw with a custom command 00:02:21.111 [544/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:21.111 [545/740] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:21.370 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:21.370 [547/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:21.370 [548/740] Linking static target lib/librte_graph.a 00:02:21.370 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:21.628 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:21.628 [551/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:21.628 [552/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:21.888 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:21.888 [554/740] Generating lib/rte_node_def with a custom command 00:02:21.888 [555/740] Generating lib/rte_node_mingw with a custom command 00:02:21.888 [556/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:21.888 [557/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:21.888 [558/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:22.146 [559/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:22.146 [560/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:22.146 [561/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.146 [562/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.146 [563/740] Linking target lib/librte_graph.so.23.0 00:02:22.146 [564/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:22.146 [565/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:22.146 [566/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:22.146 [567/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:22.146 [568/740] Linking static target lib/librte_node.a 00:02:22.146 [569/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:22.146 [570/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:22.405 [571/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:22.405 [572/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:22.405 [573/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:22.405 [574/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:22.405 [575/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:22.405 [576/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:22.405 [577/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:22.405 [578/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:22.405 [579/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:22.405 [580/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:22.405 [581/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.664 [582/740] Linking target lib/librte_node.so.23.0 00:02:22.664 [583/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:22.664 [584/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:22.664 [585/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.664 [586/740] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.664 [587/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.664 [588/740] Linking static target drivers/librte_bus_vdev.a 00:02:22.664 [589/740] Linking static target drivers/librte_bus_pci.a 00:02:22.924 [590/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.924 [591/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.924 [592/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:22.924 [593/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:22.924 [594/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:22.924 [595/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.924 [596/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:23.183 [597/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:23.183 [598/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:23.183 [599/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:23.183 [600/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.183 [601/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:23.443 [602/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:23.443 [603/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:23.443 [604/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.443 [605/740] Linking static target drivers/librte_mempool_ring.a 00:02:23.443 [606/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.443 [607/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:23.701 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:23.701 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:23.960 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:23.960 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:24.217 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:24.475 [613/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:24.475 [614/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:24.734 [615/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:24.734 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:24.992 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:24.992 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:24.992 [619/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:24.992 [620/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:24.992 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:25.561 [622/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:25.561 [623/740] Compiling C object 
app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:25.820 [624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:26.079 [625/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:26.079 [626/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:26.079 [627/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:26.079 [628/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:26.339 [629/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:26.339 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:26.339 [631/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:26.339 [632/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:26.339 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:26.907 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:26.907 [635/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:26.907 [636/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:26.908 [637/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:26.908 [638/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:27.167 [639/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:27.167 [640/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:27.167 [641/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:27.167 [642/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:27.167 [643/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:27.167 [644/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:27.167 [645/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:27.167 [646/740] Linking static target drivers/librte_net_i40e.a 00:02:27.429 [647/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:27.429 [648/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:27.688 [649/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:27.688 [650/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:27.688 [651/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.946 [652/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:27.946 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:27.946 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:27.946 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:28.204 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:28.204 [657/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:28.204 [658/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:28.204 
[659/740] Linking static target lib/librte_vhost.a 00:02:28.204 [660/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:28.204 [661/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:28.463 [662/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:28.463 [663/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:28.463 [664/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:28.463 [665/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:28.721 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:28.721 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:28.979 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:29.261 [669/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.261 [670/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:29.261 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:29.261 [672/740] Linking target lib/librte_vhost.so.23.0 00:02:29.261 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:29.557 [674/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:29.557 [675/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:29.557 [676/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:29.816 [677/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:29.816 [678/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:29.817 [679/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:30.076 [680/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:30.076 [681/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:30.076 [682/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:30.076 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:30.076 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:30.076 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:30.336 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:30.336 [687/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:30.336 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:30.336 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:30.336 [690/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:30.597 [691/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:30.597 [692/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:30.855 [693/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:30.856 [694/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:31.115 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:31.115 [696/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:31.374 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:31.374 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:31.374 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:31.634 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:31.634 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:31.894 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:31.894 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:31.894 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:32.155 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:32.155 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:32.155 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:32.414 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:32.673 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:32.673 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:32.933 [711/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:32.933 [712/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:32.933 [713/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:32.933 [714/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:33.191 [715/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:33.191 [716/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:33.191 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:33.191 [718/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:33.191 [719/740] Linking static target lib/librte_pipeline.a 00:02:33.450 [720/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:33.708 [721/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:33.708 [722/740] Linking target app/dpdk-dumpcap 00:02:33.708 [723/740] Linking target app/dpdk-test-acl 00:02:33.709 [724/740] Linking target app/dpdk-pdump 00:02:33.709 [725/740] Linking target app/dpdk-proc-info 00:02:33.709 [726/740] Linking target app/dpdk-test-bbdev 00:02:33.709 [727/740] Linking target app/dpdk-test-cmdline 00:02:33.709 [728/740] Linking target app/dpdk-test-compress-perf 00:02:33.967 [729/740] Linking target app/dpdk-test-crypto-perf 00:02:33.967 [730/740] Linking target app/dpdk-test-eventdev 00:02:34.227 [731/740] Linking target app/dpdk-test-flow-perf 00:02:34.227 [732/740] Linking target app/dpdk-test-gpudev 00:02:34.227 [733/740] Linking target app/dpdk-test-pipeline 00:02:34.227 [734/740] Linking target app/dpdk-test-fib 00:02:34.227 [735/740] Linking target app/dpdk-test-regex 00:02:34.227 [736/740] Linking target app/dpdk-testpmd 00:02:34.227 [737/740] Linking target app/dpdk-test-sad 00:02:34.227 [738/740] Linking target app/dpdk-test-security-perf 00:02:38.420 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.420 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:38.420 00:06:52 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:38.420 ninja: Entering directory 
`/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:38.420 [0/1] Installing files. 00:02:38.420 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:38.420 
Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.420 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.421 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.421 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.421 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.421 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.421 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.421 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.421 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.421 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.421 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.421 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.421 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:38.421 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.684 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.685 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:38.685 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:38.685 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:38.686 
Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:38.686 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:38.687 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:38.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:38.687 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:38.687 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.687 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:38.688 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:38.688 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:38.688 Installing drivers/librte_net_i40e.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.688 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:38.688 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.688 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.688 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.688 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.688 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.688 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.688 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.688 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.688 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.948 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.948 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.948 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.948 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.948 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.948 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.948 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.948 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 
Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.948 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:38.949 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:38.949 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:38.949 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:38.949 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:38.949 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:38.949 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:38.949 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:38.949 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:38.949 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:38.949 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:38.949 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:38.949 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:38.949 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:38.949 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:38.949 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:38.949 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:38.949 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:38.949 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:38.949 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:38.949 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:38.949 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
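Each "Installing symlink pointing to X to Y" entry in this run creates, at path Y, a link whose target is X, so every DPDK library ends up with the usual two-level SONAME chain under the build prefix. A minimal sketch of what that yields for one of the libraries above, assuming the same prefix shown in the log (the annotated ls output is illustrative, not captured from this run):

    ls -l /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so*
    # librte_ethdev.so -> librte_ethdev.so.23        (linker name, what -lrte_ethdev resolves to at build time)
    # librte_ethdev.so.23 -> librte_ethdev.so.23.0   (SONAME the dynamic loader looks up at run time)
    # librte_ethdev.so.23.0                          (the shared object itself)
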
00:02:38.949 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:38.949 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:38.949 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:38.949 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:38.949 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:38.949 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:38.949 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:38.949 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:38.949 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:38.949 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:38.949 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:38.949 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:38.949 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:38.949 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:38.949 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:38.949 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:38.949 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:38.949 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:38.949 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:38.949 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:38.949 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:38.949 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:38.949 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:38.949 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:38.949 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:38.949 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:38.949 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:38.949 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:38.949 
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:38.949 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:38.949 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:38.949 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:38.949 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:38.949 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:38.949 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:38.949 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:38.949 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:38.949 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:38.950 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:38.950 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:38.950 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:38.950 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:38.950 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:38.950 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:38.950 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:38.950 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:38.950 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:38.950 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:38.950 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:38.950 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:38.950 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:38.950 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:38.950 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:38.950 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:38.950 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:38.950 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:38.950 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:38.950 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:38.950 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:38.950 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:38.950 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:38.950 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:38.950 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:38.950 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:38.950 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:38.950 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:38.950 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:38.950 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:38.950 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:38.950 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:38.950 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:38.950 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:38.950 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:38.950 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:38.950 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:38.950 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:38.950 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:38.950 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:38.950 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:38.950 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:38.950 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:38.950 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:38.950 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:38.950 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:38.950 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:38.950 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:38.950 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:38.950 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:38.950 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:38.950 
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:38.950 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:38.950 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:38.950 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:38.950 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:38.950 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:38.950 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:38.950 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:38.950 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:38.950 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:38.950 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:38.950 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:38.950 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:38.950 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:38.950 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:38.950 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:38.950 00:06:53 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:38.950 00:06:53 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:38.950 00:06:53 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:38.950 00:06:53 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:38.950 00:02:38.950 real 0m42.336s 00:02:38.950 user 4m11.230s 00:02:38.950 sys 0m57.032s 00:02:38.950 00:06:53 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:38.950 ************************************ 00:02:38.950 END TEST build_native_dpdk 00:02:38.950 ************************************ 00:02:38.950 00:06:53 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:38.950 00:06:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:38.950 00:06:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:38.950 00:06:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:38.950 00:06:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:38.950 00:06:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:38.950 00:06:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:38.950 00:06:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:38.950 00:06:53 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:02:39.207 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:39.466 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:39.466 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:39.466 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:39.724 Using 'verbs' RDMA provider 00:02:56.024 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:14.098 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:14.098 Creating mk/config.mk...done. 00:03:14.098 Creating mk/cc.flags.mk...done. 00:03:14.098 Type 'make' to build. 00:03:14.098 00:07:26 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:14.098 00:07:26 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:14.098 00:07:26 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:14.098 00:07:26 -- common/autotest_common.sh@10 -- $ set +x 00:03:14.098 ************************************ 00:03:14.098 START TEST make 00:03:14.098 ************************************ 00:03:14.098 00:07:26 make -- common/autotest_common.sh@1121 -- $ make -j10 00:03:14.098 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:14.098 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:14.098 meson setup builddir \ 00:03:14.098 -Dwith-libaio=enabled \ 00:03:14.098 -Dwith-liburing=enabled \ 00:03:14.098 -Dwith-libvfn=disabled \ 00:03:14.098 -Dwith-spdk=false && \ 00:03:14.098 meson compile -C builddir && \ 00:03:14.098 cd -) 00:03:14.098 make[1]: Nothing to be done for 'all'. 
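Collapsing the xtrace fragments above into one standalone sequence, a minimal sketch for replaying this configure-and-build stage by hand, assuming the same checkouts under /home/vagrant/spdk_repo (every flag is copied from the invocation logged above):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared
    make -j10   # first runs the xnvme meson setup/compile shown above, then builds SPDK itself
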
00:03:14.356 The Meson build system 00:03:14.356 Version: 1.3.1 00:03:14.356 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:14.356 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:14.356 Build type: native build 00:03:14.356 Project name: xnvme 00:03:14.356 Project version: 0.7.3 00:03:14.356 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:14.356 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:14.356 Host machine cpu family: x86_64 00:03:14.356 Host machine cpu: x86_64 00:03:14.356 Message: host_machine.system: linux 00:03:14.356 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:14.356 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:14.356 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:14.356 Run-time dependency threads found: YES 00:03:14.356 Has header "setupapi.h" : NO 00:03:14.356 Has header "linux/blkzoned.h" : YES 00:03:14.356 Has header "linux/blkzoned.h" : YES (cached) 00:03:14.356 Has header "libaio.h" : YES 00:03:14.356 Library aio found: YES 00:03:14.356 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:14.356 Run-time dependency liburing found: YES 2.2 00:03:14.356 Dependency libvfn skipped: feature with-libvfn disabled 00:03:14.356 Run-time dependency appleframeworks found: NO (tried framework) 00:03:14.356 Run-time dependency appleframeworks found: NO (tried framework) 00:03:14.356 Configuring xnvme_config.h using configuration 00:03:14.356 Configuring xnvme.spec using configuration 00:03:14.356 Run-time dependency bash-completion found: YES 2.11 00:03:14.356 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:14.356 Program cp found: YES (/usr/bin/cp) 00:03:14.356 Has header "winsock2.h" : NO 00:03:14.356 Has header "dbghelp.h" : NO 00:03:14.356 Library rpcrt4 found: NO 00:03:14.356 Library rt found: YES 00:03:14.356 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:14.356 Found CMake: /usr/bin/cmake (3.27.7) 00:03:14.356 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:14.356 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:14.356 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:14.356 Build targets in project: 32 00:03:14.356 00:03:14.356 xnvme 0.7.3 00:03:14.356 00:03:14.356 User defined options 00:03:14.356 with-libaio : enabled 00:03:14.356 with-liburing: enabled 00:03:14.356 with-libvfn : disabled 00:03:14.356 with-spdk : false 00:03:14.356 00:03:14.356 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:14.614 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:14.872 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:14.872 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:14.872 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:14.872 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:14.872 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:14.872 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:14.872 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:14.872 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:14.872 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:14.872 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:14.872 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:14.872 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:14.872 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:14.872 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:14.872 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:15.130 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:15.130 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:15.130 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:15.130 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:15.130 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:15.130 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:15.130 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:15.130 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:15.130 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:15.130 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:15.130 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:15.130 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:15.130 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:15.130 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:15.130 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:15.130 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:15.130 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:15.130 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:15.130 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:15.130 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:15.130 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:15.130 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:15.130 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:15.130 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:15.130 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:15.130 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:15.130 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:15.130 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:15.130 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:15.130 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:15.130 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:15.130 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:15.130 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:15.130 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:15.130 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:15.130 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:15.130 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:15.388 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:15.388 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:15.388 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:15.388 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:15.388 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:15.388 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:15.388 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:15.388 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:15.388 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:15.388 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:15.388 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:15.388 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:15.388 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:15.388 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:15.388 [67/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:15.388 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:15.388 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:15.388 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:15.646 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:15.646 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:15.646 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:15.646 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:15.646 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:15.646 [76/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:15.646 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:15.646 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:15.646 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:15.646 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:15.646 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:15.646 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:15.646 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:15.646 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:15.646 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:15.646 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:15.904 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:15.904 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:15.904 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:15.904 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:15.904 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:15.904 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:15.904 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:15.904 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:15.904 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:15.904 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:15.904 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:15.904 [98/203] Compiling 
C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:15.904 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:15.904 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:15.904 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:15.904 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:15.904 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:15.904 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:15.904 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:15.904 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:15.904 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:15.904 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:15.904 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:15.904 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:15.904 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:15.904 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:15.904 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:15.904 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:15.904 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:15.904 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:15.904 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:15.904 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:15.904 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:16.169 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:16.170 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:16.170 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:16.170 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:16.170 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:16.170 [125/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:16.170 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:16.170 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:16.170 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:16.170 [129/203] Linking target lib/libxnvme.so 00:03:16.170 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:16.170 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:16.170 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:16.170 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:16.170 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:16.170 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:16.170 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:16.170 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:16.170 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:16.170 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:16.170 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:16.427 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:16.427 [142/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:16.427 
[143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:16.427 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:16.427 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:16.427 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:16.427 [147/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:16.427 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:16.427 [149/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:16.427 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:16.427 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:16.427 [152/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:16.427 [153/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:16.427 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:16.685 [155/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:16.685 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:16.685 [157/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:16.685 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:16.685 [159/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:16.685 [160/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:16.685 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:16.685 [162/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:16.685 [163/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:16.685 [164/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:16.685 [165/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:16.685 [166/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:16.685 [167/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:16.685 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:16.685 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:16.942 [170/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:16.942 [171/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:16.942 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:16.942 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:16.942 [174/203] Linking static target lib/libxnvme.a 00:03:16.942 [175/203] Linking target tests/xnvme_tests_buf 00:03:16.942 [176/203] Linking target tests/xnvme_tests_lblk 00:03:16.942 [177/203] Linking target tests/xnvme_tests_scc 00:03:16.943 [178/203] Linking target tests/xnvme_tests_async_intf 00:03:16.943 [179/203] Linking target tests/xnvme_tests_ioworker 00:03:16.943 [180/203] Linking target tests/xnvme_tests_enum 00:03:16.943 [181/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:16.943 [182/203] Linking target tests/xnvme_tests_xnvme_file 00:03:16.943 [183/203] Linking target tests/xnvme_tests_znd_append 00:03:16.943 [184/203] Linking target tests/xnvme_tests_cli 00:03:16.943 [185/203] Linking target tests/xnvme_tests_znd_state 00:03:16.943 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:16.943 [187/203] Linking target tools/xdd 00:03:16.943 [188/203] Linking target tools/xnvme_file 00:03:16.943 [189/203] Linking target tests/xnvme_tests_kvs 00:03:16.943 [190/203] Linking target tools/xnvme 00:03:16.943 [191/203] Linking target tools/lblk 00:03:16.943 [192/203] 
Linking target tests/xnvme_tests_znd_zrwa 00:03:16.943 [193/203] Linking target tests/xnvme_tests_map 00:03:16.943 [194/203] Linking target tools/zoned 00:03:16.943 [195/203] Linking target examples/xnvme_dev 00:03:16.943 [196/203] Linking target tools/kvs 00:03:16.943 [197/203] Linking target examples/xnvme_io_async 00:03:16.943 [198/203] Linking target examples/xnvme_enum 00:03:16.943 [199/203] Linking target examples/xnvme_hello 00:03:16.943 [200/203] Linking target examples/zoned_io_async 00:03:16.943 [201/203] Linking target examples/xnvme_single_async 00:03:16.943 [202/203] Linking target examples/xnvme_single_sync 00:03:17.200 [203/203] Linking target examples/zoned_io_sync 00:03:17.200 INFO: autodetecting backend as ninja 00:03:17.200 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:17.200 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:35.299 CC lib/ut/ut.o 00:03:35.299 CC lib/ut_mock/mock.o 00:03:35.299 CC lib/log/log_flags.o 00:03:35.299 CC lib/log/log.o 00:03:35.299 CC lib/log/log_deprecated.o 00:03:35.299 LIB libspdk_ut.a 00:03:35.299 LIB libspdk_ut_mock.a 00:03:35.299 SO libspdk_ut.so.2.0 00:03:35.299 LIB libspdk_log.a 00:03:35.299 SO libspdk_ut_mock.so.6.0 00:03:35.299 SYMLINK libspdk_ut.so 00:03:35.299 SO libspdk_log.so.7.0 00:03:35.299 SYMLINK libspdk_ut_mock.so 00:03:35.299 SYMLINK libspdk_log.so 00:03:35.299 CXX lib/trace_parser/trace.o 00:03:35.299 CC lib/dma/dma.o 00:03:35.299 CC lib/util/base64.o 00:03:35.299 CC lib/util/bit_array.o 00:03:35.299 CC lib/util/cpuset.o 00:03:35.299 CC lib/util/crc16.o 00:03:35.299 CC lib/util/crc32c.o 00:03:35.299 CC lib/ioat/ioat.o 00:03:35.299 CC lib/util/crc32.o 00:03:35.299 CC lib/vfio_user/host/vfio_user_pci.o 00:03:35.299 CC lib/vfio_user/host/vfio_user.o 00:03:35.299 CC lib/util/crc32_ieee.o 00:03:35.299 CC lib/util/crc64.o 00:03:35.299 CC lib/util/dif.o 00:03:35.299 LIB libspdk_dma.a 00:03:35.299 CC lib/util/fd.o 00:03:35.299 SO libspdk_dma.so.4.0 00:03:35.299 CC lib/util/file.o 00:03:35.299 CC lib/util/hexlify.o 00:03:35.299 SYMLINK libspdk_dma.so 00:03:35.299 CC lib/util/iov.o 00:03:35.299 CC lib/util/math.o 00:03:35.299 LIB libspdk_ioat.a 00:03:35.299 CC lib/util/pipe.o 00:03:35.299 CC lib/util/strerror_tls.o 00:03:35.299 SO libspdk_ioat.so.7.0 00:03:35.299 LIB libspdk_vfio_user.a 00:03:35.299 CC lib/util/string.o 00:03:35.299 SYMLINK libspdk_ioat.so 00:03:35.299 CC lib/util/uuid.o 00:03:35.299 CC lib/util/fd_group.o 00:03:35.299 SO libspdk_vfio_user.so.5.0 00:03:35.299 CC lib/util/xor.o 00:03:35.299 CC lib/util/zipf.o 00:03:35.299 SYMLINK libspdk_vfio_user.so 00:03:35.299 LIB libspdk_util.a 00:03:35.299 SO libspdk_util.so.9.0 00:03:35.299 LIB libspdk_trace_parser.a 00:03:35.299 SO libspdk_trace_parser.so.5.0 00:03:35.299 SYMLINK libspdk_util.so 00:03:35.299 SYMLINK libspdk_trace_parser.so 00:03:35.299 CC lib/conf/conf.o 00:03:35.299 CC lib/rdma/common.o 00:03:35.299 CC lib/rdma/rdma_verbs.o 00:03:35.299 CC lib/env_dpdk/env.o 00:03:35.299 CC lib/env_dpdk/init.o 00:03:35.299 CC lib/env_dpdk/pci.o 00:03:35.299 CC lib/env_dpdk/memory.o 00:03:35.299 CC lib/vmd/vmd.o 00:03:35.299 CC lib/idxd/idxd.o 00:03:35.299 CC lib/json/json_parse.o 00:03:35.299 CC lib/vmd/led.o 00:03:35.299 LIB libspdk_conf.a 00:03:35.299 SO libspdk_conf.so.6.0 00:03:35.299 CC lib/json/json_util.o 00:03:35.299 LIB libspdk_rdma.a 00:03:35.299 SYMLINK libspdk_conf.so 00:03:35.299 CC lib/json/json_write.o 00:03:35.299 SO libspdk_rdma.so.6.0 00:03:35.299 CC lib/idxd/idxd_user.o 00:03:35.299 CC 
lib/idxd/idxd_kernel.o 00:03:35.299 CC lib/env_dpdk/threads.o 00:03:35.299 CC lib/env_dpdk/pci_ioat.o 00:03:35.299 SYMLINK libspdk_rdma.so 00:03:35.299 CC lib/env_dpdk/pci_virtio.o 00:03:35.299 CC lib/env_dpdk/pci_vmd.o 00:03:35.299 CC lib/env_dpdk/pci_idxd.o 00:03:35.299 CC lib/env_dpdk/pci_event.o 00:03:35.299 CC lib/env_dpdk/sigbus_handler.o 00:03:35.299 CC lib/env_dpdk/pci_dpdk.o 00:03:35.299 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:35.299 LIB libspdk_json.a 00:03:35.299 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:35.299 SO libspdk_json.so.6.0 00:03:35.299 LIB libspdk_idxd.a 00:03:35.299 SYMLINK libspdk_json.so 00:03:35.299 SO libspdk_idxd.so.12.0 00:03:35.299 LIB libspdk_vmd.a 00:03:35.299 SO libspdk_vmd.so.6.0 00:03:35.299 SYMLINK libspdk_idxd.so 00:03:35.558 SYMLINK libspdk_vmd.so 00:03:35.558 CC lib/jsonrpc/jsonrpc_server.o 00:03:35.558 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:35.558 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:35.558 CC lib/jsonrpc/jsonrpc_client.o 00:03:35.816 LIB libspdk_jsonrpc.a 00:03:36.076 SO libspdk_jsonrpc.so.6.0 00:03:36.076 SYMLINK libspdk_jsonrpc.so 00:03:36.076 LIB libspdk_env_dpdk.a 00:03:36.334 SO libspdk_env_dpdk.so.14.0 00:03:36.334 SYMLINK libspdk_env_dpdk.so 00:03:36.334 CC lib/rpc/rpc.o 00:03:36.594 LIB libspdk_rpc.a 00:03:36.853 SO libspdk_rpc.so.6.0 00:03:36.853 SYMLINK libspdk_rpc.so 00:03:37.111 CC lib/keyring/keyring_rpc.o 00:03:37.111 CC lib/keyring/keyring.o 00:03:37.111 CC lib/trace/trace_rpc.o 00:03:37.111 CC lib/trace/trace.o 00:03:37.111 CC lib/trace/trace_flags.o 00:03:37.111 CC lib/notify/notify.o 00:03:37.111 CC lib/notify/notify_rpc.o 00:03:37.370 LIB libspdk_notify.a 00:03:37.370 LIB libspdk_keyring.a 00:03:37.370 SO libspdk_notify.so.6.0 00:03:37.370 SO libspdk_keyring.so.1.0 00:03:37.370 LIB libspdk_trace.a 00:03:37.370 SYMLINK libspdk_notify.so 00:03:37.370 SYMLINK libspdk_keyring.so 00:03:37.630 SO libspdk_trace.so.10.0 00:03:37.630 SYMLINK libspdk_trace.so 00:03:37.889 CC lib/thread/thread.o 00:03:37.889 CC lib/thread/iobuf.o 00:03:37.889 CC lib/sock/sock.o 00:03:37.889 CC lib/sock/sock_rpc.o 00:03:38.458 LIB libspdk_sock.a 00:03:38.458 SO libspdk_sock.so.9.0 00:03:38.458 SYMLINK libspdk_sock.so 00:03:39.026 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:39.026 CC lib/nvme/nvme_ctrlr.o 00:03:39.026 CC lib/nvme/nvme_fabric.o 00:03:39.026 CC lib/nvme/nvme_ns.o 00:03:39.026 CC lib/nvme/nvme_ns_cmd.o 00:03:39.026 CC lib/nvme/nvme_pcie_common.o 00:03:39.026 CC lib/nvme/nvme_pcie.o 00:03:39.026 CC lib/nvme/nvme_qpair.o 00:03:39.026 CC lib/nvme/nvme.o 00:03:39.594 LIB libspdk_thread.a 00:03:39.594 CC lib/nvme/nvme_quirks.o 00:03:39.594 CC lib/nvme/nvme_transport.o 00:03:39.594 SO libspdk_thread.so.10.0 00:03:39.594 CC lib/nvme/nvme_discovery.o 00:03:39.594 SYMLINK libspdk_thread.so 00:03:39.594 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:39.594 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:39.594 CC lib/nvme/nvme_tcp.o 00:03:39.852 CC lib/nvme/nvme_opal.o 00:03:39.852 CC lib/nvme/nvme_io_msg.o 00:03:40.111 CC lib/nvme/nvme_poll_group.o 00:03:40.111 CC lib/accel/accel.o 00:03:40.111 CC lib/accel/accel_rpc.o 00:03:40.111 CC lib/accel/accel_sw.o 00:03:40.111 CC lib/nvme/nvme_zns.o 00:03:40.370 CC lib/nvme/nvme_stubs.o 00:03:40.370 CC lib/nvme/nvme_auth.o 00:03:40.370 CC lib/nvme/nvme_cuse.o 00:03:40.370 CC lib/blob/blobstore.o 00:03:40.370 CC lib/blob/request.o 00:03:40.628 CC lib/blob/zeroes.o 00:03:40.628 CC lib/blob/blob_bs_dev.o 00:03:40.628 CC lib/nvme/nvme_rdma.o 00:03:40.887 CC lib/init/json_config.o 00:03:40.887 CC lib/virtio/virtio.o 00:03:40.887 CC 
lib/virtio/virtio_vhost_user.o 00:03:41.145 CC lib/init/subsystem.o 00:03:41.145 CC lib/init/subsystem_rpc.o 00:03:41.145 CC lib/virtio/virtio_vfio_user.o 00:03:41.145 LIB libspdk_accel.a 00:03:41.145 CC lib/virtio/virtio_pci.o 00:03:41.145 SO libspdk_accel.so.15.0 00:03:41.145 CC lib/init/rpc.o 00:03:41.145 SYMLINK libspdk_accel.so 00:03:41.404 LIB libspdk_init.a 00:03:41.404 SO libspdk_init.so.5.0 00:03:41.404 LIB libspdk_virtio.a 00:03:41.404 SYMLINK libspdk_init.so 00:03:41.404 SO libspdk_virtio.so.7.0 00:03:41.663 CC lib/bdev/bdev.o 00:03:41.663 CC lib/bdev/bdev_rpc.o 00:03:41.663 CC lib/bdev/bdev_zone.o 00:03:41.663 CC lib/bdev/part.o 00:03:41.663 CC lib/bdev/scsi_nvme.o 00:03:41.663 SYMLINK libspdk_virtio.so 00:03:41.663 CC lib/event/app.o 00:03:41.663 CC lib/event/log_rpc.o 00:03:41.922 CC lib/event/reactor.o 00:03:41.922 CC lib/event/app_rpc.o 00:03:41.922 CC lib/event/scheduler_static.o 00:03:42.180 LIB libspdk_nvme.a 00:03:42.180 LIB libspdk_event.a 00:03:42.180 SO libspdk_nvme.so.13.0 00:03:42.439 SO libspdk_event.so.13.0 00:03:42.439 SYMLINK libspdk_event.so 00:03:42.699 SYMLINK libspdk_nvme.so 00:03:43.637 LIB libspdk_blob.a 00:03:43.637 SO libspdk_blob.so.11.0 00:03:43.895 SYMLINK libspdk_blob.so 00:03:44.153 CC lib/blobfs/blobfs.o 00:03:44.153 CC lib/blobfs/tree.o 00:03:44.153 CC lib/lvol/lvol.o 00:03:44.153 LIB libspdk_bdev.a 00:03:44.411 SO libspdk_bdev.so.15.0 00:03:44.411 SYMLINK libspdk_bdev.so 00:03:44.670 CC lib/ublk/ublk.o 00:03:44.670 CC lib/ublk/ublk_rpc.o 00:03:44.670 CC lib/scsi/dev.o 00:03:44.670 CC lib/nvmf/ctrlr.o 00:03:44.670 CC lib/nvmf/ctrlr_discovery.o 00:03:44.670 CC lib/nbd/nbd.o 00:03:44.670 CC lib/nvmf/ctrlr_bdev.o 00:03:44.670 CC lib/ftl/ftl_core.o 00:03:44.928 CC lib/scsi/lun.o 00:03:44.928 CC lib/scsi/port.o 00:03:44.928 LIB libspdk_blobfs.a 00:03:44.928 SO libspdk_blobfs.so.10.0 00:03:45.187 SYMLINK libspdk_blobfs.so 00:03:45.187 CC lib/scsi/scsi.o 00:03:45.187 CC lib/nvmf/subsystem.o 00:03:45.187 CC lib/ftl/ftl_init.o 00:03:45.187 LIB libspdk_lvol.a 00:03:45.187 CC lib/nbd/nbd_rpc.o 00:03:45.187 SO libspdk_lvol.so.10.0 00:03:45.187 CC lib/ftl/ftl_layout.o 00:03:45.187 CC lib/nvmf/nvmf.o 00:03:45.187 SYMLINK libspdk_lvol.so 00:03:45.187 CC lib/nvmf/nvmf_rpc.o 00:03:45.187 CC lib/scsi/scsi_bdev.o 00:03:45.446 LIB libspdk_nbd.a 00:03:45.446 CC lib/scsi/scsi_pr.o 00:03:45.446 LIB libspdk_ublk.a 00:03:45.446 SO libspdk_nbd.so.7.0 00:03:45.446 SO libspdk_ublk.so.3.0 00:03:45.446 SYMLINK libspdk_nbd.so 00:03:45.446 CC lib/scsi/scsi_rpc.o 00:03:45.446 CC lib/scsi/task.o 00:03:45.446 SYMLINK libspdk_ublk.so 00:03:45.446 CC lib/nvmf/transport.o 00:03:45.446 CC lib/ftl/ftl_debug.o 00:03:45.704 CC lib/nvmf/tcp.o 00:03:45.704 CC lib/ftl/ftl_io.o 00:03:45.704 CC lib/ftl/ftl_sb.o 00:03:45.704 CC lib/ftl/ftl_l2p.o 00:03:45.704 LIB libspdk_scsi.a 00:03:45.704 SO libspdk_scsi.so.9.0 00:03:45.963 CC lib/ftl/ftl_l2p_flat.o 00:03:45.963 CC lib/ftl/ftl_nv_cache.o 00:03:45.963 SYMLINK libspdk_scsi.so 00:03:45.963 CC lib/ftl/ftl_band.o 00:03:45.963 CC lib/ftl/ftl_band_ops.o 00:03:45.963 CC lib/ftl/ftl_writer.o 00:03:45.963 CC lib/ftl/ftl_rq.o 00:03:46.222 CC lib/ftl/ftl_reloc.o 00:03:46.222 CC lib/nvmf/stubs.o 00:03:46.222 CC lib/ftl/ftl_l2p_cache.o 00:03:46.222 CC lib/nvmf/mdns_server.o 00:03:46.222 CC lib/ftl/ftl_p2l.o 00:03:46.222 CC lib/ftl/mngt/ftl_mngt.o 00:03:46.480 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:46.480 CC lib/iscsi/conn.o 00:03:46.480 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:46.480 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:46.739 CC 
lib/ftl/mngt/ftl_mngt_md.o 00:03:46.739 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:46.739 CC lib/iscsi/init_grp.o 00:03:46.739 CC lib/nvmf/rdma.o 00:03:46.739 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:46.739 CC lib/vhost/vhost.o 00:03:46.739 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:46.997 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:46.997 CC lib/iscsi/iscsi.o 00:03:46.997 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:46.997 CC lib/nvmf/auth.o 00:03:46.997 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:46.997 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:46.997 CC lib/iscsi/md5.o 00:03:47.255 CC lib/iscsi/param.o 00:03:47.255 CC lib/vhost/vhost_rpc.o 00:03:47.255 CC lib/vhost/vhost_scsi.o 00:03:47.255 CC lib/vhost/vhost_blk.o 00:03:47.255 CC lib/iscsi/portal_grp.o 00:03:47.255 CC lib/vhost/rte_vhost_user.o 00:03:47.513 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:47.513 CC lib/iscsi/tgt_node.o 00:03:47.513 CC lib/ftl/utils/ftl_conf.o 00:03:47.513 CC lib/iscsi/iscsi_subsystem.o 00:03:47.771 CC lib/iscsi/iscsi_rpc.o 00:03:47.771 CC lib/ftl/utils/ftl_md.o 00:03:47.771 CC lib/iscsi/task.o 00:03:48.030 CC lib/ftl/utils/ftl_mempool.o 00:03:48.030 CC lib/ftl/utils/ftl_bitmap.o 00:03:48.030 CC lib/ftl/utils/ftl_property.o 00:03:48.030 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:48.030 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:48.030 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:48.030 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:48.288 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:48.288 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:48.288 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:48.288 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:48.288 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:48.288 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:48.288 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:48.288 LIB libspdk_iscsi.a 00:03:48.288 CC lib/ftl/base/ftl_base_dev.o 00:03:48.288 LIB libspdk_vhost.a 00:03:48.546 CC lib/ftl/base/ftl_base_bdev.o 00:03:48.546 CC lib/ftl/ftl_trace.o 00:03:48.546 SO libspdk_iscsi.so.8.0 00:03:48.546 SO libspdk_vhost.so.8.0 00:03:48.546 SYMLINK libspdk_vhost.so 00:03:48.546 SYMLINK libspdk_iscsi.so 00:03:48.805 LIB libspdk_ftl.a 00:03:49.064 LIB libspdk_nvmf.a 00:03:49.064 SO libspdk_ftl.so.9.0 00:03:49.064 SO libspdk_nvmf.so.18.0 00:03:49.322 SYMLINK libspdk_ftl.so 00:03:49.322 SYMLINK libspdk_nvmf.so 00:03:49.937 CC module/env_dpdk/env_dpdk_rpc.o 00:03:49.937 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:49.937 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:49.937 CC module/accel/ioat/accel_ioat.o 00:03:49.937 CC module/keyring/linux/keyring.o 00:03:49.937 CC module/sock/posix/posix.o 00:03:49.937 CC module/scheduler/gscheduler/gscheduler.o 00:03:49.937 CC module/accel/error/accel_error.o 00:03:49.937 CC module/keyring/file/keyring.o 00:03:49.937 CC module/blob/bdev/blob_bdev.o 00:03:49.937 LIB libspdk_env_dpdk_rpc.a 00:03:49.937 SO libspdk_env_dpdk_rpc.so.6.0 00:03:49.937 LIB libspdk_scheduler_dpdk_governor.a 00:03:49.937 CC module/keyring/linux/keyring_rpc.o 00:03:49.937 CC module/keyring/file/keyring_rpc.o 00:03:49.937 LIB libspdk_scheduler_gscheduler.a 00:03:49.937 SYMLINK libspdk_env_dpdk_rpc.so 00:03:49.937 CC module/accel/error/accel_error_rpc.o 00:03:49.937 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:49.937 LIB libspdk_scheduler_dynamic.a 00:03:49.937 SO libspdk_scheduler_gscheduler.so.4.0 00:03:49.937 CC module/accel/ioat/accel_ioat_rpc.o 00:03:49.937 SO libspdk_scheduler_dynamic.so.4.0 00:03:49.937 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:49.937 SYMLINK libspdk_scheduler_gscheduler.so 00:03:49.937 SYMLINK 
libspdk_scheduler_dynamic.so 00:03:50.196 LIB libspdk_keyring_linux.a 00:03:50.196 LIB libspdk_blob_bdev.a 00:03:50.196 LIB libspdk_keyring_file.a 00:03:50.196 LIB libspdk_accel_error.a 00:03:50.196 SO libspdk_keyring_linux.so.1.0 00:03:50.196 SO libspdk_blob_bdev.so.11.0 00:03:50.196 SO libspdk_keyring_file.so.1.0 00:03:50.196 LIB libspdk_accel_ioat.a 00:03:50.196 SO libspdk_accel_error.so.2.0 00:03:50.196 SO libspdk_accel_ioat.so.6.0 00:03:50.196 SYMLINK libspdk_blob_bdev.so 00:03:50.196 SYMLINK libspdk_keyring_linux.so 00:03:50.196 SYMLINK libspdk_keyring_file.so 00:03:50.196 CC module/accel/iaa/accel_iaa.o 00:03:50.196 CC module/accel/iaa/accel_iaa_rpc.o 00:03:50.196 CC module/accel/dsa/accel_dsa.o 00:03:50.196 SYMLINK libspdk_accel_error.so 00:03:50.196 CC module/accel/dsa/accel_dsa_rpc.o 00:03:50.196 SYMLINK libspdk_accel_ioat.so 00:03:50.455 LIB libspdk_accel_iaa.a 00:03:50.455 SO libspdk_accel_iaa.so.3.0 00:03:50.455 CC module/bdev/delay/vbdev_delay.o 00:03:50.455 CC module/bdev/error/vbdev_error.o 00:03:50.455 CC module/bdev/gpt/gpt.o 00:03:50.455 CC module/bdev/lvol/vbdev_lvol.o 00:03:50.455 LIB libspdk_accel_dsa.a 00:03:50.455 CC module/blobfs/bdev/blobfs_bdev.o 00:03:50.455 CC module/bdev/malloc/bdev_malloc.o 00:03:50.455 SO libspdk_accel_dsa.so.5.0 00:03:50.455 CC module/bdev/null/bdev_null.o 00:03:50.455 SYMLINK libspdk_accel_iaa.so 00:03:50.455 CC module/bdev/gpt/vbdev_gpt.o 00:03:50.714 SYMLINK libspdk_accel_dsa.so 00:03:50.714 LIB libspdk_sock_posix.a 00:03:50.714 CC module/bdev/null/bdev_null_rpc.o 00:03:50.714 SO libspdk_sock_posix.so.6.0 00:03:50.714 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:50.714 CC module/bdev/error/vbdev_error_rpc.o 00:03:50.714 SYMLINK libspdk_sock_posix.so 00:03:50.714 LIB libspdk_blobfs_bdev.a 00:03:50.714 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:50.714 LIB libspdk_bdev_null.a 00:03:50.714 LIB libspdk_bdev_gpt.a 00:03:50.714 LIB libspdk_bdev_error.a 00:03:50.714 SO libspdk_blobfs_bdev.so.6.0 00:03:50.714 SO libspdk_bdev_null.so.6.0 00:03:50.972 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:50.972 SO libspdk_bdev_gpt.so.6.0 00:03:50.972 CC module/bdev/nvme/bdev_nvme.o 00:03:50.972 SO libspdk_bdev_error.so.6.0 00:03:50.972 CC module/bdev/passthru/vbdev_passthru.o 00:03:50.972 SYMLINK libspdk_blobfs_bdev.so 00:03:50.972 SYMLINK libspdk_bdev_null.so 00:03:50.972 SYMLINK libspdk_bdev_gpt.so 00:03:50.972 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:50.972 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:50.972 SYMLINK libspdk_bdev_error.so 00:03:50.972 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:50.972 CC module/bdev/raid/bdev_raid.o 00:03:50.972 LIB libspdk_bdev_delay.a 00:03:50.972 LIB libspdk_bdev_malloc.a 00:03:50.972 SO libspdk_bdev_delay.so.6.0 00:03:50.972 SO libspdk_bdev_malloc.so.6.0 00:03:50.972 CC module/bdev/nvme/nvme_rpc.o 00:03:50.972 CC module/bdev/split/vbdev_split.o 00:03:50.972 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:51.231 SYMLINK libspdk_bdev_delay.so 00:03:51.231 SYMLINK libspdk_bdev_malloc.so 00:03:51.231 CC module/bdev/raid/bdev_raid_rpc.o 00:03:51.231 LIB libspdk_bdev_passthru.a 00:03:51.231 SO libspdk_bdev_passthru.so.6.0 00:03:51.231 CC module/bdev/xnvme/bdev_xnvme.o 00:03:51.231 SYMLINK libspdk_bdev_passthru.so 00:03:51.231 CC module/bdev/split/vbdev_split_rpc.o 00:03:51.231 LIB libspdk_bdev_lvol.a 00:03:51.231 SO libspdk_bdev_lvol.so.6.0 00:03:51.490 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:51.490 SYMLINK libspdk_bdev_lvol.so 00:03:51.490 CC module/bdev/raid/bdev_raid_sb.o 00:03:51.490 CC 
module/bdev/aio/bdev_aio.o 00:03:51.490 LIB libspdk_bdev_split.a 00:03:51.490 CC module/bdev/ftl/bdev_ftl.o 00:03:51.490 CC module/bdev/iscsi/bdev_iscsi.o 00:03:51.490 SO libspdk_bdev_split.so.6.0 00:03:51.490 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:51.490 LIB libspdk_bdev_zone_block.a 00:03:51.490 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:51.490 SYMLINK libspdk_bdev_split.so 00:03:51.490 CC module/bdev/raid/raid0.o 00:03:51.490 SO libspdk_bdev_zone_block.so.6.0 00:03:51.749 CC module/bdev/nvme/bdev_mdns_client.o 00:03:51.749 SYMLINK libspdk_bdev_zone_block.so 00:03:51.749 CC module/bdev/nvme/vbdev_opal.o 00:03:51.749 CC module/bdev/aio/bdev_aio_rpc.o 00:03:51.749 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:51.749 LIB libspdk_bdev_xnvme.a 00:03:51.749 CC module/bdev/raid/raid1.o 00:03:51.749 SO libspdk_bdev_xnvme.so.3.0 00:03:51.749 LIB libspdk_bdev_iscsi.a 00:03:51.749 SYMLINK libspdk_bdev_xnvme.so 00:03:51.749 CC module/bdev/raid/concat.o 00:03:51.749 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:51.749 SO libspdk_bdev_iscsi.so.6.0 00:03:52.008 LIB libspdk_bdev_aio.a 00:03:52.008 SO libspdk_bdev_aio.so.6.0 00:03:52.008 LIB libspdk_bdev_ftl.a 00:03:52.008 SYMLINK libspdk_bdev_iscsi.so 00:03:52.008 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:52.008 SO libspdk_bdev_ftl.so.6.0 00:03:52.008 SYMLINK libspdk_bdev_aio.so 00:03:52.008 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:52.008 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:52.008 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:52.008 SYMLINK libspdk_bdev_ftl.so 00:03:52.008 LIB libspdk_bdev_raid.a 00:03:52.266 SO libspdk_bdev_raid.so.6.0 00:03:52.266 SYMLINK libspdk_bdev_raid.so 00:03:52.526 LIB libspdk_bdev_virtio.a 00:03:52.526 SO libspdk_bdev_virtio.so.6.0 00:03:52.785 SYMLINK libspdk_bdev_virtio.so 00:03:53.044 LIB libspdk_bdev_nvme.a 00:03:53.303 SO libspdk_bdev_nvme.so.7.0 00:03:53.303 SYMLINK libspdk_bdev_nvme.so 00:03:53.871 CC module/event/subsystems/sock/sock.o 00:03:53.871 CC module/event/subsystems/iobuf/iobuf.o 00:03:53.871 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:53.871 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:53.871 CC module/event/subsystems/keyring/keyring.o 00:03:53.871 CC module/event/subsystems/vmd/vmd.o 00:03:53.871 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:54.129 CC module/event/subsystems/scheduler/scheduler.o 00:03:54.129 LIB libspdk_event_keyring.a 00:03:54.129 LIB libspdk_event_vhost_blk.a 00:03:54.129 LIB libspdk_event_sock.a 00:03:54.129 LIB libspdk_event_iobuf.a 00:03:54.129 LIB libspdk_event_vmd.a 00:03:54.129 SO libspdk_event_keyring.so.1.0 00:03:54.129 SO libspdk_event_vhost_blk.so.3.0 00:03:54.129 SO libspdk_event_sock.so.5.0 00:03:54.129 LIB libspdk_event_scheduler.a 00:03:54.129 SO libspdk_event_iobuf.so.3.0 00:03:54.129 SO libspdk_event_vmd.so.6.0 00:03:54.129 SYMLINK libspdk_event_keyring.so 00:03:54.129 SO libspdk_event_scheduler.so.4.0 00:03:54.129 SYMLINK libspdk_event_vhost_blk.so 00:03:54.129 SYMLINK libspdk_event_sock.so 00:03:54.129 SYMLINK libspdk_event_iobuf.so 00:03:54.129 SYMLINK libspdk_event_vmd.so 00:03:54.129 SYMLINK libspdk_event_scheduler.so 00:03:54.698 CC module/event/subsystems/accel/accel.o 00:03:54.698 LIB libspdk_event_accel.a 00:03:54.698 SO libspdk_event_accel.so.6.0 00:03:54.958 SYMLINK libspdk_event_accel.so 00:03:55.217 CC module/event/subsystems/bdev/bdev.o 00:03:55.481 LIB libspdk_event_bdev.a 00:03:55.481 SO libspdk_event_bdev.so.6.0 00:03:55.481 SYMLINK libspdk_event_bdev.so 00:03:56.047 CC module/event/subsystems/nvmf/nvmf_rpc.o 
00:03:56.047 CC module/event/subsystems/ublk/ublk.o 00:03:56.047 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:56.047 CC module/event/subsystems/scsi/scsi.o 00:03:56.047 CC module/event/subsystems/nbd/nbd.o 00:03:56.047 LIB libspdk_event_ublk.a 00:03:56.047 LIB libspdk_event_scsi.a 00:03:56.047 LIB libspdk_event_nbd.a 00:03:56.047 SO libspdk_event_ublk.so.3.0 00:03:56.047 SO libspdk_event_scsi.so.6.0 00:03:56.047 SO libspdk_event_nbd.so.6.0 00:03:56.047 LIB libspdk_event_nvmf.a 00:03:56.047 SYMLINK libspdk_event_scsi.so 00:03:56.047 SYMLINK libspdk_event_nbd.so 00:03:56.047 SYMLINK libspdk_event_ublk.so 00:03:56.306 SO libspdk_event_nvmf.so.6.0 00:03:56.306 SYMLINK libspdk_event_nvmf.so 00:03:56.565 CC module/event/subsystems/iscsi/iscsi.o 00:03:56.565 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:56.565 LIB libspdk_event_iscsi.a 00:03:56.565 LIB libspdk_event_vhost_scsi.a 00:03:56.825 SO libspdk_event_vhost_scsi.so.3.0 00:03:56.825 SO libspdk_event_iscsi.so.6.0 00:03:56.825 SYMLINK libspdk_event_iscsi.so 00:03:56.825 SYMLINK libspdk_event_vhost_scsi.so 00:03:57.085 SO libspdk.so.6.0 00:03:57.085 SYMLINK libspdk.so 00:03:57.345 CXX app/trace/trace.o 00:03:57.345 TEST_HEADER include/spdk/accel.h 00:03:57.345 TEST_HEADER include/spdk/accel_module.h 00:03:57.345 TEST_HEADER include/spdk/assert.h 00:03:57.345 TEST_HEADER include/spdk/barrier.h 00:03:57.345 TEST_HEADER include/spdk/base64.h 00:03:57.345 TEST_HEADER include/spdk/bdev.h 00:03:57.345 TEST_HEADER include/spdk/bdev_module.h 00:03:57.345 TEST_HEADER include/spdk/bdev_zone.h 00:03:57.345 TEST_HEADER include/spdk/bit_array.h 00:03:57.345 TEST_HEADER include/spdk/bit_pool.h 00:03:57.345 TEST_HEADER include/spdk/blob_bdev.h 00:03:57.345 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:57.345 TEST_HEADER include/spdk/blobfs.h 00:03:57.345 TEST_HEADER include/spdk/blob.h 00:03:57.345 TEST_HEADER include/spdk/conf.h 00:03:57.345 TEST_HEADER include/spdk/config.h 00:03:57.345 TEST_HEADER include/spdk/cpuset.h 00:03:57.345 TEST_HEADER include/spdk/crc16.h 00:03:57.345 TEST_HEADER include/spdk/crc32.h 00:03:57.345 TEST_HEADER include/spdk/crc64.h 00:03:57.345 TEST_HEADER include/spdk/dif.h 00:03:57.345 TEST_HEADER include/spdk/dma.h 00:03:57.345 TEST_HEADER include/spdk/endian.h 00:03:57.345 TEST_HEADER include/spdk/env_dpdk.h 00:03:57.345 TEST_HEADER include/spdk/env.h 00:03:57.345 TEST_HEADER include/spdk/event.h 00:03:57.345 TEST_HEADER include/spdk/fd_group.h 00:03:57.345 TEST_HEADER include/spdk/fd.h 00:03:57.345 TEST_HEADER include/spdk/file.h 00:03:57.345 TEST_HEADER include/spdk/ftl.h 00:03:57.345 TEST_HEADER include/spdk/gpt_spec.h 00:03:57.345 TEST_HEADER include/spdk/hexlify.h 00:03:57.345 TEST_HEADER include/spdk/histogram_data.h 00:03:57.345 TEST_HEADER include/spdk/idxd.h 00:03:57.345 TEST_HEADER include/spdk/idxd_spec.h 00:03:57.345 TEST_HEADER include/spdk/init.h 00:03:57.345 TEST_HEADER include/spdk/ioat.h 00:03:57.345 TEST_HEADER include/spdk/ioat_spec.h 00:03:57.345 CC test/event/event_perf/event_perf.o 00:03:57.345 TEST_HEADER include/spdk/iscsi_spec.h 00:03:57.345 TEST_HEADER include/spdk/json.h 00:03:57.345 TEST_HEADER include/spdk/jsonrpc.h 00:03:57.345 TEST_HEADER include/spdk/keyring.h 00:03:57.345 TEST_HEADER include/spdk/keyring_module.h 00:03:57.345 TEST_HEADER include/spdk/likely.h 00:03:57.345 TEST_HEADER include/spdk/log.h 00:03:57.345 CC test/blobfs/mkfs/mkfs.o 00:03:57.345 TEST_HEADER include/spdk/lvol.h 00:03:57.345 CC test/dma/test_dma/test_dma.o 00:03:57.345 CC test/accel/dif/dif.o 00:03:57.345 CC 
test/bdev/bdevio/bdevio.o 00:03:57.345 TEST_HEADER include/spdk/memory.h 00:03:57.345 TEST_HEADER include/spdk/mmio.h 00:03:57.345 CC examples/accel/perf/accel_perf.o 00:03:57.345 TEST_HEADER include/spdk/nbd.h 00:03:57.345 CC test/app/bdev_svc/bdev_svc.o 00:03:57.345 TEST_HEADER include/spdk/notify.h 00:03:57.345 TEST_HEADER include/spdk/nvme.h 00:03:57.345 TEST_HEADER include/spdk/nvme_intel.h 00:03:57.345 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:57.345 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:57.345 TEST_HEADER include/spdk/nvme_spec.h 00:03:57.345 TEST_HEADER include/spdk/nvme_zns.h 00:03:57.345 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:57.345 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:57.345 TEST_HEADER include/spdk/nvmf.h 00:03:57.345 TEST_HEADER include/spdk/nvmf_spec.h 00:03:57.345 TEST_HEADER include/spdk/nvmf_transport.h 00:03:57.345 TEST_HEADER include/spdk/opal.h 00:03:57.345 TEST_HEADER include/spdk/opal_spec.h 00:03:57.345 TEST_HEADER include/spdk/pci_ids.h 00:03:57.345 TEST_HEADER include/spdk/pipe.h 00:03:57.345 TEST_HEADER include/spdk/queue.h 00:03:57.345 TEST_HEADER include/spdk/reduce.h 00:03:57.345 TEST_HEADER include/spdk/rpc.h 00:03:57.345 TEST_HEADER include/spdk/scheduler.h 00:03:57.345 TEST_HEADER include/spdk/scsi.h 00:03:57.604 CC test/env/mem_callbacks/mem_callbacks.o 00:03:57.604 TEST_HEADER include/spdk/scsi_spec.h 00:03:57.604 TEST_HEADER include/spdk/sock.h 00:03:57.604 TEST_HEADER include/spdk/stdinc.h 00:03:57.604 TEST_HEADER include/spdk/string.h 00:03:57.604 TEST_HEADER include/spdk/thread.h 00:03:57.604 TEST_HEADER include/spdk/trace.h 00:03:57.604 TEST_HEADER include/spdk/trace_parser.h 00:03:57.604 TEST_HEADER include/spdk/tree.h 00:03:57.604 TEST_HEADER include/spdk/ublk.h 00:03:57.604 TEST_HEADER include/spdk/util.h 00:03:57.604 TEST_HEADER include/spdk/uuid.h 00:03:57.604 TEST_HEADER include/spdk/version.h 00:03:57.604 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:57.604 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:57.604 TEST_HEADER include/spdk/vhost.h 00:03:57.604 TEST_HEADER include/spdk/vmd.h 00:03:57.604 TEST_HEADER include/spdk/xor.h 00:03:57.604 TEST_HEADER include/spdk/zipf.h 00:03:57.604 CXX test/cpp_headers/accel.o 00:03:57.604 LINK event_perf 00:03:57.604 LINK bdev_svc 00:03:57.604 LINK mkfs 00:03:57.604 LINK mem_callbacks 00:03:57.604 CXX test/cpp_headers/accel_module.o 00:03:57.604 LINK spdk_trace 00:03:57.863 CC test/event/reactor/reactor.o 00:03:57.863 LINK test_dma 00:03:57.863 CXX test/cpp_headers/assert.o 00:03:57.863 CXX test/cpp_headers/barrier.o 00:03:57.863 LINK bdevio 00:03:57.863 CC test/env/vtophys/vtophys.o 00:03:57.863 LINK dif 00:03:57.863 LINK reactor 00:03:57.863 CC app/trace_record/trace_record.o 00:03:57.863 LINK accel_perf 00:03:57.863 CXX test/cpp_headers/base64.o 00:03:57.863 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:58.122 LINK vtophys 00:03:58.122 CXX test/cpp_headers/bdev.o 00:03:58.122 CC test/event/reactor_perf/reactor_perf.o 00:03:58.122 LINK spdk_trace_record 00:03:58.122 CC test/event/app_repeat/app_repeat.o 00:03:58.122 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:58.122 CC examples/bdev/hello_world/hello_bdev.o 00:03:58.122 CC test/lvol/esnap/esnap.o 00:03:58.381 CC examples/blob/hello_world/hello_blob.o 00:03:58.381 CC examples/ioat/perf/perf.o 00:03:58.381 CXX test/cpp_headers/bdev_module.o 00:03:58.381 LINK reactor_perf 00:03:58.381 LINK app_repeat 00:03:58.381 LINK env_dpdk_post_init 00:03:58.381 LINK nvme_fuzz 00:03:58.381 CXX test/cpp_headers/bdev_zone.o 
00:03:58.381 LINK hello_bdev 00:03:58.381 CC app/nvmf_tgt/nvmf_main.o 00:03:58.381 LINK hello_blob 00:03:58.381 LINK ioat_perf 00:03:58.642 CXX test/cpp_headers/bit_array.o 00:03:58.642 CC test/env/memory/memory_ut.o 00:03:58.642 CC test/event/scheduler/scheduler.o 00:03:58.642 LINK nvmf_tgt 00:03:58.642 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:58.642 CC examples/nvme/hello_world/hello_world.o 00:03:58.642 CC examples/ioat/verify/verify.o 00:03:58.642 CXX test/cpp_headers/bit_pool.o 00:03:58.913 CC examples/bdev/bdevperf/bdevperf.o 00:03:58.913 CC examples/blob/cli/blobcli.o 00:03:58.913 LINK hello_world 00:03:58.913 LINK scheduler 00:03:58.913 CXX test/cpp_headers/blob_bdev.o 00:03:58.913 CC app/iscsi_tgt/iscsi_tgt.o 00:03:58.913 LINK verify 00:03:59.190 CXX test/cpp_headers/blobfs_bdev.o 00:03:59.190 CXX test/cpp_headers/blobfs.o 00:03:59.190 CC examples/nvme/reconnect/reconnect.o 00:03:59.190 LINK iscsi_tgt 00:03:59.190 CXX test/cpp_headers/blob.o 00:03:59.190 CC examples/sock/hello_world/hello_sock.o 00:03:59.190 LINK blobcli 00:03:59.449 LINK memory_ut 00:03:59.449 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:59.449 CXX test/cpp_headers/conf.o 00:03:59.449 LINK reconnect 00:03:59.449 CC app/spdk_tgt/spdk_tgt.o 00:03:59.449 LINK hello_sock 00:03:59.449 CXX test/cpp_headers/config.o 00:03:59.449 CXX test/cpp_headers/cpuset.o 00:03:59.708 CC test/env/pci/pci_ut.o 00:03:59.708 LINK spdk_tgt 00:03:59.708 LINK bdevperf 00:03:59.708 CC app/spdk_lspci/spdk_lspci.o 00:03:59.708 CC app/spdk_nvme_perf/perf.o 00:03:59.708 CXX test/cpp_headers/crc16.o 00:03:59.708 CC app/spdk_nvme_identify/identify.o 00:03:59.708 LINK spdk_lspci 00:03:59.967 LINK nvme_manage 00:03:59.967 CXX test/cpp_headers/crc32.o 00:03:59.967 CC app/spdk_nvme_discover/discovery_aer.o 00:03:59.967 CC app/spdk_top/spdk_top.o 00:03:59.967 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:59.967 LINK pci_ut 00:03:59.967 CXX test/cpp_headers/crc64.o 00:03:59.967 CC examples/nvme/arbitration/arbitration.o 00:03:59.967 LINK spdk_nvme_discover 00:04:00.226 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:00.226 CXX test/cpp_headers/dif.o 00:04:00.226 CXX test/cpp_headers/dma.o 00:04:00.485 CC app/vhost/vhost.o 00:04:00.485 LINK arbitration 00:04:00.485 LINK iscsi_fuzz 00:04:00.485 LINK spdk_nvme_perf 00:04:00.485 CC examples/nvme/hotplug/hotplug.o 00:04:00.485 CXX test/cpp_headers/endian.o 00:04:00.485 CXX test/cpp_headers/env_dpdk.o 00:04:00.485 LINK vhost 00:04:00.744 LINK spdk_nvme_identify 00:04:00.744 CXX test/cpp_headers/env.o 00:04:00.744 LINK vhost_fuzz 00:04:00.744 LINK hotplug 00:04:00.744 CC app/spdk_dd/spdk_dd.o 00:04:00.744 CXX test/cpp_headers/event.o 00:04:00.744 LINK spdk_top 00:04:00.744 CC test/app/histogram_perf/histogram_perf.o 00:04:00.744 CXX test/cpp_headers/fd_group.o 00:04:00.744 CC test/app/jsoncat/jsoncat.o 00:04:01.003 CC test/app/stub/stub.o 00:04:01.003 CC app/fio/nvme/fio_plugin.o 00:04:01.003 LINK histogram_perf 00:04:01.003 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:01.003 CXX test/cpp_headers/fd.o 00:04:01.003 LINK jsoncat 00:04:01.003 CC examples/nvme/abort/abort.o 00:04:01.003 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:01.003 LINK stub 00:04:01.003 CXX test/cpp_headers/file.o 00:04:01.262 LINK spdk_dd 00:04:01.262 CXX test/cpp_headers/ftl.o 00:04:01.262 LINK cmb_copy 00:04:01.262 LINK pmr_persistence 00:04:01.262 CC test/rpc_client/rpc_client_test.o 00:04:01.262 CXX test/cpp_headers/gpt_spec.o 00:04:01.262 CC test/nvme/aer/aer.o 00:04:01.262 CXX test/cpp_headers/hexlify.o 
00:04:01.521 CC examples/vmd/lsvmd/lsvmd.o 00:04:01.521 CC examples/vmd/led/led.o 00:04:01.521 LINK abort 00:04:01.521 LINK rpc_client_test 00:04:01.521 CC examples/nvmf/nvmf/nvmf.o 00:04:01.521 CXX test/cpp_headers/histogram_data.o 00:04:01.521 LINK lsvmd 00:04:01.521 LINK spdk_nvme 00:04:01.521 LINK led 00:04:01.521 LINK aer 00:04:01.521 CC test/thread/poller_perf/poller_perf.o 00:04:01.779 CXX test/cpp_headers/idxd.o 00:04:01.779 CC test/nvme/reset/reset.o 00:04:01.779 CC app/fio/bdev/fio_plugin.o 00:04:01.779 CC test/nvme/sgl/sgl.o 00:04:01.779 LINK poller_perf 00:04:01.779 CC test/nvme/e2edp/nvme_dp.o 00:04:01.779 LINK nvmf 00:04:01.779 CXX test/cpp_headers/idxd_spec.o 00:04:01.779 CC test/nvme/overhead/overhead.o 00:04:02.038 CC examples/util/zipf/zipf.o 00:04:02.038 LINK reset 00:04:02.038 CXX test/cpp_headers/init.o 00:04:02.039 CC test/nvme/err_injection/err_injection.o 00:04:02.039 LINK sgl 00:04:02.039 LINK zipf 00:04:02.039 LINK nvme_dp 00:04:02.297 LINK overhead 00:04:02.297 CXX test/cpp_headers/ioat.o 00:04:02.297 CC examples/thread/thread/thread_ex.o 00:04:02.297 CC test/nvme/startup/startup.o 00:04:02.297 LINK err_injection 00:04:02.297 LINK spdk_bdev 00:04:02.297 CC test/nvme/reserve/reserve.o 00:04:02.297 CC test/nvme/simple_copy/simple_copy.o 00:04:02.297 CXX test/cpp_headers/ioat_spec.o 00:04:02.297 CC test/nvme/connect_stress/connect_stress.o 00:04:02.297 LINK startup 00:04:02.297 CC test/nvme/boot_partition/boot_partition.o 00:04:02.556 CC test/nvme/compliance/nvme_compliance.o 00:04:02.556 LINK thread 00:04:02.556 CC test/nvme/fused_ordering/fused_ordering.o 00:04:02.556 CXX test/cpp_headers/iscsi_spec.o 00:04:02.556 LINK reserve 00:04:02.556 LINK connect_stress 00:04:02.556 LINK simple_copy 00:04:02.556 LINK boot_partition 00:04:02.556 CXX test/cpp_headers/json.o 00:04:02.556 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:02.556 LINK fused_ordering 00:04:02.815 CC test/nvme/fdp/fdp.o 00:04:02.815 LINK nvme_compliance 00:04:02.815 CXX test/cpp_headers/jsonrpc.o 00:04:02.815 CXX test/cpp_headers/keyring.o 00:04:02.815 CC test/nvme/cuse/cuse.o 00:04:02.815 LINK doorbell_aers 00:04:02.815 CXX test/cpp_headers/keyring_module.o 00:04:02.815 CC examples/idxd/perf/perf.o 00:04:02.815 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:02.815 CXX test/cpp_headers/likely.o 00:04:02.815 CXX test/cpp_headers/log.o 00:04:03.074 CXX test/cpp_headers/lvol.o 00:04:03.074 CXX test/cpp_headers/memory.o 00:04:03.074 CXX test/cpp_headers/mmio.o 00:04:03.074 LINK interrupt_tgt 00:04:03.074 CXX test/cpp_headers/nbd.o 00:04:03.074 CXX test/cpp_headers/notify.o 00:04:03.074 LINK fdp 00:04:03.074 CXX test/cpp_headers/nvme.o 00:04:03.074 CXX test/cpp_headers/nvme_intel.o 00:04:03.074 CXX test/cpp_headers/nvme_ocssd.o 00:04:03.333 LINK idxd_perf 00:04:03.333 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:03.333 CXX test/cpp_headers/nvme_spec.o 00:04:03.333 CXX test/cpp_headers/nvme_zns.o 00:04:03.333 CXX test/cpp_headers/nvmf_cmd.o 00:04:03.333 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:03.333 CXX test/cpp_headers/nvmf.o 00:04:03.333 CXX test/cpp_headers/nvmf_spec.o 00:04:03.333 CXX test/cpp_headers/nvmf_transport.o 00:04:03.333 CXX test/cpp_headers/opal.o 00:04:03.333 CXX test/cpp_headers/opal_spec.o 00:04:03.333 CXX test/cpp_headers/pci_ids.o 00:04:03.333 CXX test/cpp_headers/pipe.o 00:04:03.333 CXX test/cpp_headers/queue.o 00:04:03.591 CXX test/cpp_headers/reduce.o 00:04:03.591 CXX test/cpp_headers/rpc.o 00:04:03.592 CXX test/cpp_headers/scheduler.o 00:04:03.592 CXX test/cpp_headers/scsi.o 
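
The long run of CXX test/cpp_headers/*.o entries above and below builds one tiny translation unit per public SPDK header; the point of the pass is to verify that each header compiles standalone, and as C++ in particular (hence CXX rather than CC). A minimal sketch of the same idea, assuming a hypothetical include/spdk layout and a plain c++ invocation rather than the harness's own makefile:

    # Compile a one-line source per header; the compile fails if the header is
    # not self-contained (missing includes) or not C++-clean.
    for hdr in include/spdk/*.h; do
        unit=$(mktemp --suffix=.cpp)
        printf '#include "spdk/%s"\n' "$(basename "$hdr")" > "$unit"
        c++ -Iinclude -c "$unit" -o /dev/null || echo "not self-contained: $hdr"
        rm -f "$unit"
    done
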
00:04:03.592 CXX test/cpp_headers/scsi_spec.o 00:04:03.592 CXX test/cpp_headers/sock.o 00:04:03.592 CXX test/cpp_headers/stdinc.o 00:04:03.592 CXX test/cpp_headers/string.o 00:04:03.592 LINK esnap 00:04:03.592 CXX test/cpp_headers/thread.o 00:04:03.592 CXX test/cpp_headers/trace.o 00:04:03.592 CXX test/cpp_headers/trace_parser.o 00:04:03.592 CXX test/cpp_headers/tree.o 00:04:03.592 CXX test/cpp_headers/ublk.o 00:04:03.850 CXX test/cpp_headers/util.o 00:04:03.850 CXX test/cpp_headers/uuid.o 00:04:03.850 CXX test/cpp_headers/version.o 00:04:03.850 CXX test/cpp_headers/vfio_user_pci.o 00:04:03.850 CXX test/cpp_headers/vfio_user_spec.o 00:04:03.850 CXX test/cpp_headers/vhost.o 00:04:03.850 CXX test/cpp_headers/vmd.o 00:04:03.850 CXX test/cpp_headers/xor.o 00:04:03.850 CXX test/cpp_headers/zipf.o 00:04:04.109 LINK cuse 00:04:04.109 00:04:04.109 real 0m52.409s 00:04:04.109 user 4m28.396s 00:04:04.109 sys 1m16.641s 00:04:04.109 00:08:18 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:04.109 ************************************ 00:04:04.109 END TEST make 00:04:04.109 ************************************ 00:04:04.109 00:08:18 make -- common/autotest_common.sh@10 -- $ set +x 00:04:04.368 00:08:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:04.368 00:08:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:04.368 00:08:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:04.368 00:08:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.368 00:08:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:04.368 00:08:18 -- pm/common@44 -- $ pid=5924 00:04:04.368 00:08:18 -- pm/common@50 -- $ kill -TERM 5924 00:04:04.368 00:08:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.368 00:08:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:04.368 00:08:18 -- pm/common@44 -- $ pid=5926 00:04:04.368 00:08:18 -- pm/common@50 -- $ kill -TERM 5926 00:04:04.368 00:08:18 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:04.368 00:08:18 -- nvmf/common.sh@7 -- # uname -s 00:04:04.368 00:08:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.368 00:08:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.368 00:08:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.368 00:08:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.368 00:08:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:04.368 00:08:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:04.368 00:08:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:04.368 00:08:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:04.368 00:08:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.368 00:08:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:04.368 00:08:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7893f947-64f4-4a28-aaff-df0e6993fd1b 00:04:04.368 00:08:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=7893f947-64f4-4a28-aaff-df0e6993fd1b 00:04:04.368 00:08:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.368 00:08:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:04.368 00:08:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:04.368 00:08:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:04.368 00:08:19 -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:04.368 00:08:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:04.368 00:08:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.368 00:08:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.368 00:08:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.368 00:08:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.368 00:08:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.368 00:08:19 -- paths/export.sh@5 -- # export PATH 00:04:04.368 00:08:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.368 00:08:19 -- nvmf/common.sh@47 -- # : 0 00:04:04.368 00:08:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:04.368 00:08:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:04.368 00:08:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:04.368 00:08:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.368 00:08:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.368 00:08:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:04.368 00:08:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:04.368 00:08:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:04.368 00:08:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:04.368 00:08:19 -- spdk/autotest.sh@32 -- # uname -s 00:04:04.368 00:08:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:04.368 00:08:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:04.368 00:08:19 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:04.628 00:08:19 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:04.628 00:08:19 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:04.628 00:08:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:04.628 00:08:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:04.628 00:08:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:04.628 00:08:19 -- spdk/autotest.sh@48 -- # udevadm_pid=65533 00:04:04.628 00:08:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:04.628 00:08:19 -- pm/common@17 -- # local monitor 00:04:04.628 00:08:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:04.628 00:08:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.628 00:08:19 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:04.628 00:08:19 -- pm/common@25 -- # sleep 1 00:04:04.628 00:08:19 -- pm/common@21 -- # date +%s 00:04:04.628 00:08:19 -- pm/common@21 -- # date +%s 00:04:04.628 00:08:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721693299 00:04:04.628 00:08:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721693299 00:04:04.628 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721693299_collect-vmstat.pm.log 00:04:04.628 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721693299_collect-cpu-load.pm.log 00:04:05.565 00:08:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:05.565 00:08:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:05.565 00:08:20 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:05.565 00:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:05.565 00:08:20 -- spdk/autotest.sh@59 -- # create_test_list 00:04:05.565 00:08:20 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:05.566 00:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:05.566 00:08:20 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:05.566 00:08:20 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:05.566 00:08:20 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:05.566 00:08:20 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:05.566 00:08:20 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:05.566 00:08:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:05.566 00:08:20 -- common/autotest_common.sh@1451 -- # uname 00:04:05.566 00:08:20 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:05.566 00:08:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:05.566 00:08:20 -- common/autotest_common.sh@1471 -- # uname 00:04:05.566 00:08:20 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:05.566 00:08:20 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:05.566 00:08:20 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:05.566 00:08:20 -- spdk/autotest.sh@72 -- # hash lcov 00:04:05.566 00:08:20 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:05.566 00:08:20 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:05.566 --rc lcov_branch_coverage=1 00:04:05.566 --rc lcov_function_coverage=1 00:04:05.566 --rc genhtml_branch_coverage=1 00:04:05.566 --rc genhtml_function_coverage=1 00:04:05.566 --rc genhtml_legend=1 00:04:05.566 --rc geninfo_all_blocks=1 00:04:05.566 ' 00:04:05.566 00:08:20 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:05.566 --rc lcov_branch_coverage=1 00:04:05.566 --rc lcov_function_coverage=1 00:04:05.566 --rc genhtml_branch_coverage=1 00:04:05.566 --rc genhtml_function_coverage=1 00:04:05.566 --rc genhtml_legend=1 00:04:05.566 --rc geninfo_all_blocks=1 00:04:05.566 ' 00:04:05.566 00:08:20 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:05.566 --rc lcov_branch_coverage=1 00:04:05.566 --rc lcov_function_coverage=1 00:04:05.566 --rc genhtml_branch_coverage=1 00:04:05.566 --rc genhtml_function_coverage=1 00:04:05.566 --rc genhtml_legend=1 00:04:05.566 --rc geninfo_all_blocks=1 00:04:05.566 --no-external' 00:04:05.566 00:08:20 
-- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:05.566 --rc lcov_branch_coverage=1 00:04:05.566 --rc lcov_function_coverage=1 00:04:05.566 --rc genhtml_branch_coverage=1 00:04:05.566 --rc genhtml_function_coverage=1 00:04:05.566 --rc genhtml_legend=1 00:04:05.566 --rc geninfo_all_blocks=1 00:04:05.566 --no-external' 00:04:05.566 00:08:20 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:05.825 lcov: LCOV version 1.14 00:04:05.825 00:08:20 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:20.730 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:20.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:32.961 
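
The lcov invocation above (-c -i, tagged Baseline) captures a zero-count baseline before any test runs, so that source files never exercised later still appear in the final report at 0% coverage. That is also why the stream of geninfo "no functions found" warnings follows: the .gcno files for the header-check objects contain no function records, which is expected and harmless during a baseline capture. A typical flow built around such a baseline looks like the sketch below; only the baseline pass is visible in this excerpt, so the post-test capture and merge steps are assumptions, and $SRC stands in for the instrumented build directory:

    # Zero-count baseline: -i (initial) records every instrumented line as unexecuted.
    lcov --no-external -q -c -i -d "$SRC" -t Baseline -o cov_base.info
    # ... run the test suites ...
    # Capture real execution counts, then merge with the baseline so files the
    # tests never touched are still reported.
    lcov --no-external -q -c -d "$SRC" -t Tests -o cov_test.info
    lcov -a cov_base.info -a cov_test.info -o cov_total.info
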
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:32.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:32.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:32.961 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:32.962 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:32.962 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:32.962 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:32.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:35.493 00:08:49 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:35.493 00:08:49 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:35.493 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:04:35.493 00:08:49 -- spdk/autotest.sh@91 -- # rm -f 00:04:35.493 00:08:49 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:35.751 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:36.319 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:36.319 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:36.319 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:36.319 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:36.319 00:08:50 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:36.319 00:08:50 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:36.319 00:08:50 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:36.319 00:08:50 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:36.319 00:08:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:36.319 00:08:50 -- common/autotest_common.sh@1669 -- # 
is_block_zoned nvme0n1 00:04:36.319 00:08:50 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:36.319 00:08:50 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:36.319 00:08:50 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:36.319 00:08:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:36.319 00:08:50 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:36.319 00:08:50 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:36.319 00:08:50 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:36.319 00:08:50 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:36.319 00:08:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:36.319 00:08:50 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:04:36.319 00:08:50 -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:04:36.319 00:08:50 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:36.320 00:08:50 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:36.320 00:08:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:36.320 00:08:50 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:04:36.320 00:08:50 -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:04:36.320 00:08:50 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:36.320 00:08:50 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:36.320 00:08:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:36.320 00:08:50 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:04:36.320 00:08:50 -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:04:36.320 00:08:50 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:36.320 00:08:50 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:36.320 00:08:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:36.320 00:08:50 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:04:36.320 00:08:51 -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:04:36.320 00:08:51 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:36.320 00:08:51 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:36.320 00:08:51 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:36.320 00:08:51 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:04:36.320 00:08:51 -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:04:36.320 00:08:51 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:36.320 00:08:51 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:36.320 00:08:51 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:36.320 00:08:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.579 00:08:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:36.579 00:08:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:36.579 00:08:51 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:36.579 00:08:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:36.579 No valid GPT data, bailing 00:04:36.579 00:08:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:36.579 00:08:51 -- 
scripts/common.sh@391 -- # pt= 00:04:36.579 00:08:51 -- scripts/common.sh@392 -- # return 1 00:04:36.579 00:08:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:36.579 1+0 records in 00:04:36.579 1+0 records out 00:04:36.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168983 s, 62.1 MB/s 00:04:36.579 00:08:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.579 00:08:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:36.579 00:08:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:36.579 00:08:51 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:36.579 00:08:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:36.579 No valid GPT data, bailing 00:04:36.579 00:08:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:36.579 00:08:51 -- scripts/common.sh@391 -- # pt= 00:04:36.579 00:08:51 -- scripts/common.sh@392 -- # return 1 00:04:36.579 00:08:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:36.579 1+0 records in 00:04:36.579 1+0 records out 00:04:36.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00598094 s, 175 MB/s 00:04:36.579 00:08:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.579 00:08:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:36.579 00:08:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:04:36.579 00:08:51 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:04:36.579 00:08:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:36.579 No valid GPT data, bailing 00:04:36.579 00:08:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:36.579 00:08:51 -- scripts/common.sh@391 -- # pt= 00:04:36.579 00:08:51 -- scripts/common.sh@392 -- # return 1 00:04:36.579 00:08:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:36.579 1+0 records in 00:04:36.579 1+0 records out 00:04:36.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00603214 s, 174 MB/s 00:04:36.579 00:08:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.579 00:08:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:36.579 00:08:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:04:36.579 00:08:51 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:04:36.579 00:08:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:36.838 No valid GPT data, bailing 00:04:36.838 00:08:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:36.838 00:08:51 -- scripts/common.sh@391 -- # pt= 00:04:36.838 00:08:51 -- scripts/common.sh@392 -- # return 1 00:04:36.838 00:08:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:36.838 1+0 records in 00:04:36.838 1+0 records out 00:04:36.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00625977 s, 168 MB/s 00:04:36.839 00:08:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.839 00:08:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:36.839 00:08:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:04:36.839 00:08:51 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:04:36.839 00:08:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:36.839 No valid GPT data, bailing 00:04:36.839 00:08:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:36.839 
00:08:51 -- scripts/common.sh@391 -- # pt= 00:04:36.839 00:08:51 -- scripts/common.sh@392 -- # return 1 00:04:36.839 00:08:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:36.839 1+0 records in 00:04:36.839 1+0 records out 00:04:36.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00598041 s, 175 MB/s 00:04:36.839 00:08:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.839 00:08:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:36.839 00:08:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:04:36.839 00:08:51 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:04:36.839 00:08:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:36.839 No valid GPT data, bailing 00:04:36.839 00:08:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:36.839 00:08:51 -- scripts/common.sh@391 -- # pt= 00:04:36.839 00:08:51 -- scripts/common.sh@392 -- # return 1 00:04:36.839 00:08:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:36.839 1+0 records in 00:04:36.839 1+0 records out 00:04:36.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00615427 s, 170 MB/s 00:04:36.839 00:08:51 -- spdk/autotest.sh@118 -- # sync 00:04:37.097 00:08:51 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:37.097 00:08:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:37.097 00:08:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:39.632 00:08:54 -- spdk/autotest.sh@124 -- # uname -s 00:04:39.632 00:08:54 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:39.632 00:08:54 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:39.632 00:08:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.632 00:08:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.632 00:08:54 -- common/autotest_common.sh@10 -- # set +x 00:04:39.632 ************************************ 00:04:39.632 START TEST setup.sh 00:04:39.632 ************************************ 00:04:39.632 00:08:54 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:39.893 * Looking for test storage... 00:04:39.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:39.893 00:08:54 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:39.893 00:08:54 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:39.893 00:08:54 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:39.893 00:08:54 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.893 00:08:54 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.893 00:08:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:39.893 ************************************ 00:04:39.893 START TEST acl 00:04:39.893 ************************************ 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:39.893 * Looking for test storage... 
00:04:39.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:39.893 00:08:54 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:39.893 00:08:54 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:40.176 00:08:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:04:40.176 00:08:54 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:04:40.176 00:08:54 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:40.176 00:08:54 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:40.176 00:08:54 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:40.176 00:08:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:04:40.176 00:08:54 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:04:40.176 00:08:54 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:40.176 00:08:54 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:40.176 00:08:54 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 
00:04:40.176 00:08:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:04:40.176 00:08:54 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:04:40.176 00:08:54 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:40.176 00:08:54 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:40.176 00:08:54 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:40.176 00:08:54 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:40.176 00:08:54 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:40.176 00:08:54 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:40.176 00:08:54 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:40.176 00:08:54 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.176 00:08:54 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.557 00:08:56 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:41.557 00:08:56 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:41.557 00:08:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.557 00:08:56 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:41.557 00:08:56 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.557 00:08:56 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:42.124 00:08:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:42.124 00:08:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:42.124 00:08:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.692 Hugepages 00:04:42.692 node hugesize free / total 00:04:42.692 00:08:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:42.692 00:08:57 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:42.692 00:08:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.692 00:04:42.692 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:42.692 00:08:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:42.692 00:08:57 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:42.692 00:08:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.951 00:08:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:42.951 00:08:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:42.951 00:08:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:42.951 00:08:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.951 00:08:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:42.951 00:08:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:42.951 00:08:57 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:42.951 00:08:57 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:42.951 00:08:57 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:42.951 00:08:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:43.209 00:08:57 setup.sh.acl -- 
setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:43.209 00:08:57 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:43.209 00:08:57 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.209 00:08:57 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.209 00:08:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:43.209 ************************************ 00:04:43.209 START TEST denied 00:04:43.209 ************************************ 00:04:43.209 00:08:57 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:43.209 00:08:57 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:43.209 00:08:57 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:43.209 00:08:57 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:43.209 00:08:57 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.209 00:08:57 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:45.113 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:45.113 00:08:59 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:45.113 00:08:59 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:45.113 00:08:59 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:45.113 00:08:59 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:45.113 00:08:59 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:45.113 00:08:59 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:45.113 00:08:59 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:45.113 00:08:59 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:45.113 00:08:59 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.113 00:08:59 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.679 00:04:51.679 real 0m7.857s 00:04:51.679 user 0m1.011s 00:04:51.679 sys 0m1.957s 00:04:51.679 00:09:05 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:51.679 00:09:05 setup.sh.acl.denied -- 
common/autotest_common.sh@10 -- # set +x
00:04:51.679 ************************************
00:04:51.679 END TEST denied
00:04:51.679 ************************************
00:04:51.679 00:09:05 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:51.679 00:09:05 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:51.679 00:09:05 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:51.679 00:09:05 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:51.679 ************************************
00:04:51.679 START TEST allowed
00:04:51.679 ************************************
00:04:51.679 00:09:05 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed
00:04:51.679 00:09:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0
00:04:51.679 00:09:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:04:51.679 00:09:05 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*'
00:04:51.679 00:09:05 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:04:51.679 00:09:05 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:52.612 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:52.612 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:04:52.612 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:04:52.612 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@"
00:04:52.612 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]]
00:04:52.612 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver
00:04:52.612 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:52.612 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:52.612 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@"
00:04:52.612 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]]
00:04:52.612 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver
00:04:52.613 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:52.613 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:52.613 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@"
00:04:52.613 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]]
00:04:52.613 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver
00:04:52.613 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:52.613 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:52.613 00:09:07 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:04:52.613 00:09:07 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:52.613 00:09:07 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:54.012
00:04:54.012 real 0m2.886s
00:04:54.012 user 0m1.139s
00:04:54.012 sys 0m1.777s
00:04:54.012 00:09:08 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable
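Both ACL tests exercise the same setup.sh knob from opposite directions: scripts/setup.sh honors a PCI_BLOCKED blocklist and a PCI_ALLOWED allowlist of BDFs when rebinding controllers to userspace drivers, and verify() afterwards just readlinks /sys/bus/pci/devices/<bdf>/driver and compares the basename against nvme. A usage sketch with the BDFs from this run:

    # denied: the blocked controller must be skipped, not rebound
    PCI_BLOCKED=' 0000:00:10.0' ./scripts/setup.sh config \
        | grep 'Skipping denied controller at 0000:00:10.0'

    # allowed: only the allowlisted controller moves to uio_pci_generic
    PCI_ALLOWED=0000:00:10.0 ./scripts/setup.sh config \
        | grep -E '0000:00:10.0 .*: nvme -> .*'

    ./scripts/setup.sh reset   # hand all controllers back to the kernel drivers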
00:04:54.012 00:09:08 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:04:54.012 ************************************
00:04:54.012 END TEST allowed
00:04:54.012 ************************************
00:04:54.270
00:04:54.270 real 0m14.304s
00:04:54.270 user 0m3.620s
00:04:54.270 sys 0m5.853s
00:04:54.270 00:09:08 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:54.270 00:09:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:54.270 ************************************
00:04:54.270 END TEST acl
00:04:54.270 ************************************
00:04:54.270 00:09:08 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:54.270 00:09:08 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:54.270 00:09:08 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:54.270 00:09:08 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:54.270 ************************************
00:04:54.270 START TEST hugepages
00:04:54.270 ************************************
00:04:54.270 00:09:08 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:54.270 * Looking for test storage...
00:04:54.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:54.270 00:09:08 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 4696104 kB' 'MemAvailable: 7384800 kB' 'Buffers: 2436 kB' 'Cached: 2892672 kB' 'SwapCached: 0 kB' 'Active: 448396 kB' 'Inactive: 2552568 kB' 'Active(anon): 116372 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552568 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 107432 kB' 'Mapped: 48808 kB' 'Shmem: 10516 kB' 'KReclaimable: 82084 kB' 'Slab: 167176 kB' 'SReclaimable: 82084 kB' 'SUnreclaim: 85092 kB' 'KernelStack: 6508 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 330616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
[... xtrace elided: the IFS=': ' / read / [[ $var == Hugepagesize ]] / continue cycle repeats for every key above until the Hugepagesize line matches ...]
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:54.531 00:09:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
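clear_hp, traced just above, walks every NUMA node and zeroes each per-size hugepage pool so a leftover allocation cannot skew the test about to run; on this single-node VM that is the two echo 0 writes. A sketch of the walk over the standard sysfs layout the trace iterates:

    shopt -s extglob
    for node in /sys/devices/system/node/node+([0-9]); do
        for hp in "$node/hugepages/hugepages-"*; do
            echo 0 > "$hp/nr_hugepages"    # drop any pre-existing pool of this page size
        done
    done
    export CLEAR_HUGE=yes                  # exported for the setup.sh runs that follow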
00:04:54.531 00:09:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:54.531 00:09:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:54.531 00:09:09 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:54.531 00:09:09 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:54.531 00:09:09 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:54.531 00:09:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:54.531 ************************************
00:04:54.531 START TEST default_setup
00:04:54.531 ************************************
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:54.531 00:09:09 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:55.097 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:56.033 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:04:56.033 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:56.033 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:56.033 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
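The nr_hugepages=1024 that get_test_nr_hugepages derives above is plain arithmetic: the caller asks for 2097152 kB (2 GiB) and that size is divided by the 2048 kB default page size discovered earlier, with the whole allocation pinned on the single node it was handed. hugepages.sh's exact expression is not shown in this log, but the numbers only fit the obvious division:

    size=2097152                                    # requested pool in kB (2 GiB)
    default_hugepages=2048                          # Hugepagesize from /proc/meminfo, in kB
    nr_hugepages=$(( size / default_hugepages ))    # 2097152 / 2048 = 1024 pages
    nodes_test[0]=$nr_hugepages                     # node 0 receives all 1024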
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:56.033 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6811644 kB' 'MemAvailable: 9500060 kB' 'Buffers: 2436 kB' 'Cached: 2892660 kB' 'SwapCached: 0 kB' 'Active: 462224 kB' 'Inactive: 2552588 kB' 'Active(anon): 130200 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121296 kB' 'Mapped: 48944 kB' 'Shmem: 10476 kB' 'KReclaimable: 81488 kB' 'Slab: 166496 kB' 'SReclaimable: 81488 kB' 'SUnreclaim: 85008 kB' 'KernelStack: 6512 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55236 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
[... xtrace elided: the same read/compare/continue cycle repeats for every key above until AnonHugePages matches ...]
00:04:56.035 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:56.035 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:56.035 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:56.035 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
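The wall of [[ key == ... ]] / continue records that dominates this part of the log is get_meminfo doing a linear scan: mapfile the meminfo file (the global /proc/meminfo here, or a node's copy with its "Node N" prefix stripped), split each line on ':' and space, and echo the value once the requested key matches. A compact re-implementation assembled from the traced commands (a sketch, not the verbatim setup/common.sh source):

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # with a node argument, read that node's view instead of the global file
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # strip the per-node "Node N " prefix
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo Hugepagesize    # -> 2048, matching the echo 2048 earlier
    get_meminfo AnonHugePages   # -> 0, matching the echo 0 just above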
00:04:56.035 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:56.035 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6811952 kB' 'MemAvailable: 9500368 kB' 'Buffers: 2436 kB' 'Cached: 2892660 kB' 'SwapCached: 0 kB' 'Active: 461820 kB' 'Inactive: 2552588 kB' 'Active(anon): 129796 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 120916 kB' 'Mapped: 48808 kB' 'Shmem: 10476 kB' 'KReclaimable: 81488 kB' 'Slab: 166508 kB' 'SReclaimable: 81488 kB' 'SUnreclaim: 85020 kB' 'KernelStack: 6544 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:04:56.035-00:04:56.300 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- [per-key scan for HugePages_Surp: MemTotal through HugePages_Rsvd each read and skipped via continue]
00:04:56.300 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.300 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:56.300 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:56.300 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:56.300 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:56.300 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 -- [same prologue as above with get=HugePages_Rsvd: node=, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ', read -r var val _]
00:04:56.300 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6811448 kB' 'MemAvailable: 9499864 kB' 'Buffers: 2436 kB' 'Cached: 2892660 kB' 'SwapCached: 0 kB' 'Active: 462008 kB' 'Inactive: 2552588 kB' 'Active(anon): 129984 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121108 kB' 'Mapped: 48808 kB' 'Shmem: 10476 kB' 'KReclaimable: 81488 kB' 'Slab: 166508 kB' 'SReclaimable: 81488 kB' 'SUnreclaim: 85020 kB' 'KernelStack: 6528 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:04:56.300-00:04:56.302 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- [per-key scan for HugePages_Rsvd: MemTotal through HugePages_Free each read and skipped via continue]
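[editor's note] Both snapshots above report the same hugepage state: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB. With a single page size and no surplus pages, the kernel's Hugetlb field is simply HugePages_Total times Hugepagesize, and the logged values are internally consistent. A one-line check on the logged numbers (plain arithmetic, not part of the test scripts):

  # 1024 pages * 2048 kB/page = 2097152 kB (2 GiB), matching 'Hugetlb: 2097152 kB' in the snapshots
  (( 1024 * 2048 == 2097152 )) && echo "Hugetlb consistent with HugePages_Total * Hugepagesize"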
00:04:56.302 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:56.302 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:56.302 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:56.302 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:56.302 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:56.302 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:56.302 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:56.302 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:56.302 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:56.302 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:56.302 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # ((  1024 == nr_hugepages + surp + resv ))
00:04:56.302 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # ((  1024 == nr_hugepages ))
00:04:56.302 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:56.302 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 -- [same prologue as above with get=HugePages_Total: node=, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ', read -r var val _]
00:04:56.302 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6811968 kB' 'MemAvailable: 9500384 kB' 'Buffers: 2436 kB' 'Cached: 2892660 kB' 'SwapCached: 0 kB' 'Active: 461816 kB' 'Inactive: 2552588 kB' 'Active(anon): 129792 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 120916 kB' 'Mapped: 48808 kB' 'Shmem: 10476 kB' 'KReclaimable: 81488 kB' 'Slab: 166504 kB' 'SReclaimable: 81488 kB' 'SUnreclaim: 85016 kB' 'KernelStack: 6528 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:04:56.302-00:04:56.303 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- [per-key scan for HugePages_Total: MemTotal through ShmemHugePages each read and skipped via continue; the excerpt ends mid-scan]
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
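
The get_nodes trace above walks the NUMA node directories under sysfs and records each node's current hugepage count; on this single-node VM that expands to nodes_sys[0]=1024. A minimal standalone sketch of that walk follows; the nr_hugepages sysfs path is an assumption (the standard kernel location for 2048 kB pages), since the xtrace only shows the already-expanded value:

    declare -a nodes_sys=()
    shopt -s extglob  # the +([0-9]) glob below is an extended pattern

    get_nodes() {
            local node
            for node in /sys/devices/system/node/node+([0-9]); do
                    # "${node##*node}" strips everything up to the last "node",
                    # leaving the numeric node id as the array index
                    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
            done
            no_nodes=${#nodes_sys[@]}
            (( no_nodes > 0 ))  # fail when no NUMA node directory was found
    }
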
00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:56.304 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6811752 kB' 'MemUsed: 5430228 kB' 'SwapCached: 0 kB' 'Active: 461872 kB' 'Inactive: 2552592 kB' 'Active(anon): 129848 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 2895096 kB' 'Mapped: 48808 kB' 'AnonPages: 120992 kB' 'Shmem: 10476 kB' 'KernelStack: 6528 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81488 kB' 'Slab: 166504 kB' 'SReclaimable: 81488 kB' 'SUnreclaim: 85016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
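
The common.sh@17-@33 trace around this lookup is the body of get_meminfo: choose the per-node meminfo file when a node id is given, strip the "Node N " prefix that sysfs prepends to every row, then scan key/value pairs until the requested key matches. Reconstructed from that trace as a minimal runnable sketch; the trailing return 1 for a missing key is an assumption, and extglob must be on for the prefix strip:

    shopt -s extglob

    get_meminfo() {
            local get=$1 node=$2
            local var val
            local mem_f mem
            mem_f=/proc/meminfo
            # with a node id, prefer the per-NUMA-node statistics from sysfs
            if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
                    mem_f=/sys/devices/system/node/node$node/meminfo
            fi
            mapfile -t mem < "$mem_f"
            # per-node rows read "Node 0 MemTotal: ..."; drop that prefix
            mem=("${mem[@]#Node +([0-9]) }")
            while IFS=': ' read -r var val _; do
                    [[ $var == "$get" ]] || continue
                    echo "$val"
                    return 0
            done < <(printf '%s\n' "${mem[@]}")
            return 1  # assumption: requested key absent
    }

With the values captured above, get_meminfo HugePages_Total prints 1024 and get_meminfo HugePages_Surp 0 prints 0.
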
00:04:56.304-00:04:56.305 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [repetitive xtrace condensed: every node0 meminfo key from MemTotal through HugePages_Free failed the HugePages_Surp match and hit continue]
00:04:56.305 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.305 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:56.305 00:09:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
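
The hugepages.sh@110 check earlier and the per-node loop it feeds verify the kernel's accounting: HugePages_Total must equal the page count the test requested plus surplus and reserved pages, globally and then per node. On this run that is 1024 == 1024 + 0 + 0. The same invariant as a tiny sketch, reusing the get_meminfo sketch above:

    nr_hugepages=1024 surp=0 resv=0   # values from this run
    total=$(get_meminfo HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch: $total" >&2
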
00:04:56.305 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.305 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.305 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.305 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.305 node0=1024 expecting 1024 00:04:56.305 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:56.305 00:09:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:56.305 ************************************ 00:04:56.305 END TEST default_setup 00:04:56.305 ************************************ 00:04:56.305 00:04:56.305 real 0m1.805s 00:04:56.305 user 0m0.676s 00:04:56.305 sys 0m1.120s 00:04:56.305 00:09:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.305 00:09:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:56.305 00:09:10 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:56.305 00:09:10 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.305 00:09:10 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.305 00:09:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:56.305 ************************************ 00:04:56.305 START TEST per_node_1G_alloc 00:04:56.305 ************************************ 00:04:56.305 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:56.305 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:56.305 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:56.305 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:56.305 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:56.305 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:56.305 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:56.305 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:56.305 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.305 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:56.305 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:56.305 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:56.305 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.305 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:56.305 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:56.306 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.306 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 
00:04:56.306 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:56.306 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:56.306 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:56.306 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:56.306 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:56.306 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:56.306 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:56.306 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.306 00:09:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:56.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.132 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:57.132 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:57.132 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:57.132 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': '
00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:57.132 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7859404 kB' 'MemAvailable: 10547836 kB' 'Buffers: 2436 kB' 'Cached: 2892660 kB' 'SwapCached: 0 kB' 'Active: 462084 kB' 'Inactive: 2552596 kB' 'Active(anon): 130060 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121116 kB' 'Mapped: 48872 kB' 'Shmem: 10476 kB' 'KReclaimable: 81504 kB' 'Slab: 166524 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 85020 kB' 'KernelStack: 6564 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:04:57.132-00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [repetitive xtrace condensed: every /proc/meminfo key from MemTotal through HardwareCorrupted failed the AnonHugePages match and hit continue]
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:57.133 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7859424 kB' 'MemAvailable: 10547856 kB' 'Buffers: 2436 kB' 'Cached: 2892660 kB' 'SwapCached: 0 kB' 'Active: 461916 kB' 'Inactive: 2552596 kB' 'Active(anon): 129892 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121080 kB' 'Mapped: 48808 kB' 'Shmem: 10476 kB' 'KReclaimable: 81504 kB' 'Slab: 166512 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 85008 kB' 'KernelStack: 6544 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
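
This second lookup passes no node id, so the node/meminfo existence test fails and get_meminfo falls back to the system-wide /proc/meminfo. The snapshot above is internally consistent: Hugetlb = HugePages_Total x Hugepagesize = 512 x 2048 kB = 1048576 kB, exactly the 1 GiB this test requested. With the earlier sketch, the two call shapes are:

    get_meminfo HugePages_Surp      # node unset -> /proc/meminfo (prints 0 here)
    get_meminfo HugePages_Surp 0    # node 0 -> /sys/devices/system/node/node0/meminfo
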
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.134 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.134 00:09:11 
[trace condensed: setup/common.sh@31-32 repeats the IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle for each remaining field of the snapshot above, Active(anon) through HugePages_Total, before resuming below] setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.398 00:09:11
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.398 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7860312 kB' 'MemAvailable: 10548744 kB' 'Buffers: 2436 kB' 'Cached: 2892660 kB' 'SwapCached: 0 kB' 'Active: 461620 kB' 'Inactive: 2552596 kB' 'Active(anon): 129596 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 120948 kB' 'Mapped: 48808 kB' 'Shmem: 10476 kB' 'KReclaimable: 81504 kB' 'Slab: 166512 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 85008 kB' 'KernelStack: 6528 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.399 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.399 00:09:11 [trace condensed: the same per-field cycle runs again against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, Active(anon) through Unaccepted, each non-matching field ending in continue] setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@32 -- # continue 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.400 nr_hugepages=512 00:04:57.400 resv_hugepages=0 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:57.400 surplus_hugepages=0 00:04:57.400 anon_hugepages=0 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.400 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7860060 kB' 'MemAvailable: 10548492 kB' 'Buffers: 2436 kB' 'Cached: 2892660 kB' 'SwapCached: 0 kB' 'Active: 461620 kB' 'Inactive: 2552596 kB' 'Active(anon): 129596 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 
'Inactive(file): 2552596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 120948 kB' 'Mapped: 48808 kB' 'Shmem: 10476 kB' 'KReclaimable: 81504 kB' 'Slab: 166508 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 85004 kB' 'KernelStack: 6528 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.401 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:57.401 00:09:11 [trace condensed: get_meminfo scans the snapshot above for \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l with the same per-field IFS/read/test/continue cycle, Active onwards; the verbatim trace resumes below]
00:04:57.402 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.402 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.402 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.402 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.402 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.402 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:57.402 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.402 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:57.402 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7860580 kB' 'MemUsed: 4381400 kB' 'SwapCached: 0 kB' 'Active: 461856 kB' 'Inactive: 2552596 kB' 'Active(anon): 129832 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 2895096 kB' 
'Mapped: 48808 kB' 'AnonPages: 120964 kB' 'Shmem: 10476 kB' 'KernelStack: 6528 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81504 kB' 'Slab: 166508 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 85004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.403 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.404 node0=512 expecting 512 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:57.404 00:04:57.404 real 0m1.059s 00:04:57.404 user 0m0.428s 00:04:57.404 sys 0m0.665s 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:57.404 00:09:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:57.404 ************************************ 00:04:57.404 END TEST per_node_1G_alloc 00:04:57.404 ************************************ 00:04:57.404 00:09:12 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:57.404 00:09:12 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:57.404 00:09:12 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 
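The trace above is dominated by one parsing idiom: setup/common.sh's get_meminfo slurps a meminfo file into an array, strips the per-node "Node <N> " prefix, and walks it key by key, emitting one continue line for every field that is not the one requested, which is what produces the long runs condensed above. A minimal self-contained sketch of that idiom, assuming plain bash with extglob; the function name and defaults here are illustrative, not SPDK's exact code:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Look up one "key: value" field, system-wide or for one NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node lookups read the sysfs copy instead of /proc/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; drop it
        # (a no-op for /proc/meminfo).
        mem=("${mem[@]#Node +([0-9]) }")
        local var val _
        # Each non-matching key is one "continue" iteration in the trace.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo_sketch HugePages_Total     # whole-system count, 512 in this run
    get_meminfo_sketch HugePages_Surp 0    # node 0 surplus pages, 0 in this run

The per-node variant differs only in which file it opens and in the prefix strip, which is why the node 0 HugePages_Surp lookup above traces exactly the same way as the system-wide HugePages_Total one.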
00:04:57.404 00:09:12 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:57.404 00:09:12 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:57.404 00:09:12 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:57.404 00:09:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:57.404 ************************************
00:04:57.404 START TEST even_2G_alloc
00:04:57.404 ************************************
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:57.404 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:57.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:58.233 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:58.233 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:58.233 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:58.233 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
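The get_test_nr_hugepages call above turns a kB-denominated size into a page count: the trace shows size=2097152 (2 GiB in kB) going in and nr_hugepages=1024 coming out, consistent with the 2048 kB Hugepagesize in the dumps below, though the division itself is not traced. A sketch of the implied arithmetic, with the exact expression an assumption:

    # Sketch of the size -> page-count step; units are kB throughout.
    size=2097152                      # 2 GiB requested, as in the trace
    default_hugepages=2048            # Hugepagesize: 2048 kB
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))
    fi
    echo "$nr_hugepages"              # prints 1024, matching nr_hugepages above

With one node (_no_nodes=1), get_test_nr_hugepages_per_node then assigns the whole pool to node 0, which is the nodes_test[_no_nodes - 1]=1024 line above.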
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6815344 kB' 'MemAvailable: 9503776 kB' 'Buffers: 2436 kB' 'Cached: 2892660 kB' 'SwapCached: 0 kB' 'Active: 462236 kB' 'Inactive: 2552596 kB' 'Active(anon): 130212 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121304 kB' 'Mapped: 48860 kB' 'Shmem: 10476 kB' 'KReclaimable: 81504 kB' 'Slab: 166500 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 84996 kB' 'KernelStack: 6516 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:04:58.233 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [... xtrace loop: every /proc/meminfo field in the dump above before AnonHugePages (MemTotal through HardwareCorrupted) does not match and is skipped via continue ...]
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
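The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above gates the anonymous-hugepage accounting on the transparent hugepage mode: the bracketed word in that string is the active mode, so AnonHugePages is only fetched when THP is not pinned to [never]. A sketch of that gate, assuming the usual sysfs path behind that string (not shown in the trace) and reusing the helper from the earlier sketch:

    # Sketch of the THP gate traced above. The enabled file reads like
    # "always [madvise] never"; matching *"[never]"* detects THP fully off.
    thp=/sys/kernel/mm/transparent_hugepage/enabled
    anon=0
    if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB in this run
    fi
    echo "anon=$anon"

Here the mode is [madvise], so the gate passes, the lookup runs, and the trace lands on anon=0.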
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6815344 kB' 'MemAvailable: 9503776 kB' 'Buffers: 2436 kB' 'Cached: 2892660 kB' 'SwapCached: 0 kB' 'Active: 462208 kB' 'Inactive: 2552596 kB' 'Active(anon): 130184 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121280 kB' 'Mapped: 48696 kB' 'Shmem: 10476 kB' 'KReclaimable: 81504 kB' 'Slab: 166500 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 84996 kB' 'KernelStack: 6544 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:04:58.234 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [... xtrace loop: fields MemTotal through Bounce are checked against HugePages_Surp and skipped via continue; the excerpt ends here, partway through this scan ...]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.498 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6815344 kB' 'MemAvailable: 9503776 kB' 'Buffers: 2436 kB' 'Cached: 2892660 kB' 'SwapCached: 0 kB' 'Active: 461756 kB' 'Inactive: 2552596 kB' 'Active(anon): 129732 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 120860 kB' 'Mapped: 48696 kB' 'Shmem: 10476 kB' 'KReclaimable: 81504 kB' 'Slab: 166500 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 84996 kB' 'KernelStack: 6560 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 
00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:58.499 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6815344 kB' 'MemAvailable: 9503776 kB' 'Buffers: 2436 kB' 'Cached: 2892660 kB' 'SwapCached: 0 kB' 'Active: 461756 kB' 'Inactive: 2552596 kB' 'Active(anon): 129732 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 120860 kB' 'Mapped: 48696 kB' 'Shmem: 10476 kB' 'KReclaimable: 81504 kB' 'Slab: 166500 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 84996 kB' 'KernelStack: 6560 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-@32 [trace elided: same read loop scans MemTotal ... HugePages_Free against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, `continue` on every key until the match]
00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:58.501 nr_hugepages=1024 00:04:58.501 resv_hugepages=0 00:04:58.501 surplus_hugepages=0 00:04:58.501 anon_hugepages=0 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6815344 kB' 'MemAvailable: 9503776 kB' 'Buffers: 2436 kB' 'Cached: 2892660 kB' 'SwapCached: 0 kB' 'Active: 462016 kB' 'Inactive: 2552596 kB' 'Active(anon): 129992 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121120 kB' 'Mapped: 48696 kB' 'Shmem: 10476 kB' 'KReclaimable: 81504 kB' 'Slab: 166500 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 84996 kB' 'KernelStack: 6560 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.501 
00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.501 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
00:04:58.502 00:09:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # scan: SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted each fail the HugePages_Total match and continue (one [[ ... ]] test plus one continue per key under xtrace)
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
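The long repeated comparison/continue runs above are the xtrace of a single key-lookup loop. Below is a minimal sketch of that get_meminfo scan, assuming the structure shown by setup/common.sh@17-@33 in this trace; the real script reads the file into an array with mapfile and strips the per-node prefix with "${mem[@]#Node +([0-9]) }", which this sketch approximates with sed for brevity.

    #!/usr/bin/env bash
    # Sketch only: mirrors the control flow traced above, not the verbatim source.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, prefer the per-node meminfo (common.sh@23-@24 above).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            # Under `set -x`, every non-matching key emits one [[ ... ]] test and
            # one "continue" -- hence the long repeated runs in this log.
            [[ $var == "$get" ]] || continue
            echo "$val"   # IFS splitting already isolates the number; "kB" lands in _
            return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")   # strip "Node <n> " prefix
        return 1
    }

In this run, get_meminfo HugePages_Total yields 1024 and get_meminfo HugePages_Surp 0 (node 0) yields 0, matching the echo lines in the trace.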
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6815344 kB' 'MemUsed: 5426636 kB' 'SwapCached: 0 kB' 'Active: 461748 kB' 'Inactive: 2552596 kB' 'Active(anon): 129724 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 2895096 kB' 'Mapped: 48696 kB' 'AnonPages: 121120 kB' 'Shmem: 10476 kB' 'KernelStack: 6560 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81504 kB' 'Slab: 166500 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 84996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:58.503 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # scan: node0 keys MemTotal through HugePages_Free each fail the HugePages_Surp match and continue
00:04:58.504 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:58.504 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:58.504 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:58.504 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:58.504 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
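For reference, the per-node bookkeeping the trace walks through (hugepages.sh@27-@117) condenses to the sketch below. The extglob pattern and array names are taken from the trace; the literal 1024 mirrors the already-expanded assignment at @30, since this excerpt does not show where that value is read from. get_meminfo is the sketch shown earlier.

    shopt -s extglob                     # required for the node+([0-9]) glob
    declare -a nodes_sys nodes_test
    nodes_test[0]=1024                   # populated earlier by get_test_nr_hugepages_per_node
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024   # expanded value as shown at hugepages.sh@30
    done
    no_nodes=${#nodes_sys[@]}            # 1 on this single-node VM
    (( no_nodes > 0 )) || exit 1
    resv=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                   # @116
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # @117; 0 here
    done

Because both resv and the per-node surplus are 0 in this run, nodes_test[0] stays at 1024, which is exactly what the node0=1024 expecting 1024 line below reports.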
00:04:58.504 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:58.504 node0=1024 expecting 1024
00:04:58.504 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:58.504 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:58.504 00:09:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:58.504 real 0m1.000s
00:04:58.504 user 0m0.430s
00:04:58.504 sys 0m0.619s
00:04:58.504 00:09:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:58.505 ************************************
00:04:58.505 END TEST even_2G_alloc
00:04:58.505 ************************************
00:04:58.505 00:09:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:58.505 00:09:13 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:58.505 00:09:13 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:58.505 00:09:13 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:58.505 00:09:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:58.505 ************************************
00:04:58.505 START TEST odd_alloc
00:04:58.505 ************************************
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
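Worth noting how the odd page count arises: the test requests HUGEMEM=2049 (MB), i.e. size=2098176 kB, against the default 2048 kB hugepage size. 2098176 / 2048 = 1024.5, so reaching nr_hugepages=1025 implies round-up division. The expression below is an assumed reconstruction of that step, not the verbatim get_test_nr_hugepages source.

    # Hypothetical reconstruction of the size -> page-count conversion.
    size_kb=2098176              # HUGEMEM=2049 MB, as set at hugepages.sh@160
    default_hugepages_kb=2048    # from 'Hugepagesize: 2048 kB'
    nr_hugepages=$(( (size_kb + default_hugepages_kb - 1) / default_hugepages_kb ))
    echo "$nr_hugepages"         # 1025, matching hugepages.sh@57 above

The meminfo snapshots further down show 'Hugetlb: 2099200 kB', which is exactly 1025 x 2048 kB, consistent with this count.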
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:58.505 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:59.072 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:59.332 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:59.332 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:59.332 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:59.332 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.332 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6809536 kB' 'MemAvailable: 9497976 kB' 'Buffers: 2436 kB' 'Cached: 2892668 kB' 'SwapCached: 0 kB' 'Active: 462148 kB' 'Inactive: 2552604 kB' 'Active(anon): 130124 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 121292 kB' 'Mapped: 48944 kB' 'Shmem: 10476 kB' 'KReclaimable: 81504 kB' 'Slab: 166500 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 84996 kB' 'KernelStack: 6568 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55300 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:04:59.333 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # scan: keys MemTotal through HardwareCorrupted each fail the AnonHugePages match and continue
00:04:59.333 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:59.333 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:59.333 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:59.333 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:59.333 00:09:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:59.333 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.333 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:59.333 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:59.333 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.333 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.333 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.333 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.334 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.334 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
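The verify_nr_hugepages prologue just traced checks transparent hugepages before sampling AnonHugePages: the mode string always [madvise] never does not contain the literal [never], so the test at hugepages.sh@96 passes and @97 queries get_meminfo. A sketch, assuming the standard sysfs THP knob (the trace only shows the already-expanded mode string), with get_meminfo as sketched earlier:

    # Sketch: skip the AnonHugePages sample only when THP is pinned to "never".
    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_mode != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # "0" in this run (value in kB)
    else
        anon=0
    fi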
00:04:59.334 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6809536 kB' 'MemAvailable: 9497976 kB' 'Buffers: 2436 kB' 'Cached: 2892668 kB' 'SwapCached: 0 kB' 'Active: 461640 kB' 'Inactive: 2552604 kB' 'Active(anon): 129616 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120992 kB' 'Mapped: 48812 kB' 'Shmem: 10476 kB' 'KReclaimable: 81504 kB' 'Slab: 166524 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 85020 kB' 'KernelStack: 6528 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:04:59.334 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.334 00:09:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.334 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # scan: keys MemTotal through HugePages_Free each fail the HugePages_Surp match and continue (trace excerpt ends here, mid-scan)
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6809536 kB' 'MemAvailable: 9497976 kB' 'Buffers: 2436 kB' 'Cached: 2892668 kB' 'SwapCached: 0 kB' 'Active: 461840 kB' 'Inactive: 2552604 kB' 'Active(anon): 129816 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 121192 kB' 'Mapped: 48812 kB' 'Shmem: 10476 kB' 'KReclaimable: 81504 kB' 'Slab: 166520 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 85016 kB' 'KernelStack: 6528 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
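[Editor's note] The trace above is setup/common.sh's get_meminfo building its key/value scan: mapfile slurps the meminfo file, an extglob expansion strips any "Node <n> " prefix, and "read -r var val _" with IFS=': ' splits each "Key: value kB" line until the requested key is found. Below is a minimal, self-contained sketch of that pattern; the helper name meminfo_value is hypothetical (the real helper is get_meminfo), but the paths and the 0/1025 values match this run.

    #!/usr/bin/env bash
    # Sketch of the get_meminfo scan pattern traced above.
    # "meminfo_value" is a hypothetical name, for illustration only.
    shopt -s extglob                     # needed for the +([0-9]) pattern below

    meminfo_value() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # With a node argument, read the per-node sysfs file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines start with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        local IFS=': '
        for line in "${mem[@]}"; do
            # e.g. "HugePages_Rsvd: 0" -> var=HugePages_Rsvd, val=0
            read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    meminfo_value HugePages_Rsvd         # prints 0 on the machine traced here

Run against the snapshot printed above, this would return 0 for HugePages_Rsvd, matching the "echo 0 / return 0" entries that follow in the trace.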
00:04:59.597 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # ... the read/compare loop walks the snapshot keys MemTotal through HugePages_Free; none matches HugePages_Rsvd, so each iteration hits "continue"
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
nr_hugepages=1025
resv_hugepages=0
surplus_hugepages=0
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.599 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6809904 kB' 'MemAvailable: 9498344 kB' 'Buffers: 2436 kB' 'Cached: 2892668 kB' 'SwapCached: 0 kB' 'Active: 461948 kB' 'Inactive: 2552604 kB' 'Active(anon): 129924 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 121104 kB' 'Mapped: 48812 kB' 'Shmem: 10476 kB' 'KReclaimable: 81504 kB' 'Slab: 166512 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 85008 kB' 'KernelStack: 6544 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
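[Editor's note] The surp=0 and resv=0 results feed the consistency checks at setup/hugepages.sh@107-110 above: the odd_alloc case deliberately requests an odd page count (1025), and the kernel's HugePages_Total must come back as exactly nr_hugepages plus surplus plus reserved. A sketch of that invariant, reusing the hypothetical meminfo_value helper from the earlier sketch:

    # Sketch of the odd_alloc accounting check (setup/hugepages.sh@107-110),
    # under the same assumptions as the meminfo_value sketch above.
    nr_hugepages=1025                               # odd page count requested by this test
    surp=$(meminfo_value HugePages_Surp)            # 0 in this run
    resv=$(meminfo_value HugePages_Rsvd)            # 0 in this run
    total=$(meminfo_value HugePages_Total)          # 1025 in this run

    # The kernel must report exactly the requested odd allocation:
    if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    else
        echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
        exit 1
    fi

On success this prints the same nr_hugepages=1025 / resv_hugepages=0 / surplus_hugepages=0 lines that appear in the captured output above.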
00:04:59.600 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # ... the read/compare loop walks the snapshot keys MemTotal through Unaccepted; none matches HugePages_Total, so each iteration hits "continue"
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6809912 kB' 'MemUsed: 5432068 kB' 'SwapCached: 0 kB' 'Active: 461956 kB' 'Inactive: 2552604 kB' 'Active(anon): 129932 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 2895104 kB' 'Mapped: 48812 kB' 'AnonPages: 121088 kB' 'Shmem: 10476 kB' 'KernelStack: 6544 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81504 kB' 'Slab: 166504 kB' 'SReclaimable: 81504 kB' 'SUnreclaim: 85000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
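[Editor's note] This call passes an explicit node argument (get_meminfo HugePages_Surp 0), so mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix; the mem=("${mem[@]#Node +([0-9]) }") expansion at setup/common.sh@29 strips that prefix so the same key scan works for both files. A standalone sketch of just the prefix handling, with paths as in this run (the example output line is illustrative):

    # Sketch: per-node meminfo lines carry a "Node <n> " prefix;
    # the extglob strip makes them parse like /proc/meminfo lines.
    shopt -s extglob
    node=0
    mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
    echo "${mem[0]}"                     # e.g. "Node 0 MemTotal: 12241980 kB"
    mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node 0 " prefix
    echo "${mem[0]}"                     # now "MemTotal: 12241980 kB"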
00:04:59.601 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # ... the read/compare loop walks the node-0 snapshot keys MemTotal through Shmem; none matches HugePages_Surp, so each iteration hits "continue"
00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.602 node0=1025 expecting 1025 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:59.602 00:04:59.602 real 0m1.042s 00:04:59.602 user 0m0.447s 00:04:59.602 sys 0m0.624s 00:04:59.602 ************************************ 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.602 00:09:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:59.602 END TEST odd_alloc 00:04:59.602 ************************************ 00:04:59.602 00:09:14 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:59.602 00:09:14 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:59.602 00:09:14 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.603 00:09:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:59.603 ************************************ 00:04:59.603 START TEST custom_alloc 00:04:59.603 ************************************ 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- 
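Aside, for readers following the trace: the odd_alloc lookup above (and every verification below) is the same setup/common.sh get_meminfo pattern replayed field by field. A minimal, hedged bash sketch of that lookup, assuming only what the xtrace itself shows; get_meminfo_sketch is an illustrative name, not the repo's helper:

#!/usr/bin/env bash
# Illustrative sketch (assumption: mirrors the traced setup/common.sh flow).
shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo_sketch() {
    local get=$1 node=${2-} var val _ line
    local mem_f=/proc/meminfo
    # Prefer the per-node meminfo when a node index is supplied, as the trace does.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node N "; strip it (no-op for /proc/meminfo).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # "HugePages_Surp:      0" -> var=HugePages_Surp, val=0
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo_sketch HugePages_Surp 0   # prints 0 for node0 in the run above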
00:04:59.602 00:09:14 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:59.602 00:09:14 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:59.602 00:09:14 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:59.603 00:09:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:59.603 ************************************
00:04:59.603 START TEST custom_alloc
00:04:59.603 ************************************
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
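Before the setup.sh run below, it may help to see the shape of what hugepages.sh@181-@187 just computed. A hedged sketch using the same variable names as the trace; this is an illustration, not the repo script itself:

#!/usr/bin/env bash
# Sketch of the per-node hugepage bucket -> HUGENODE folding traced above.
nodes_hp=([0]=512)   # desired 2048 kB hugepages per NUMA node (one node in this VM)
_nr_hugepages=0
HUGENODE=()
for node in "${!nodes_hp[@]}"; do
    # one "nodes_hp[N]=count" token per node, later consumed by scripts/setup.sh
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
# custom_alloc sets IFS=, so multi-node lists join with commas; with the single
# node above this yields HUGENODE='nodes_hp[0]=512' and 512 pages total,
# i.e. 512 x 2048 kB = 1048576 kB, matching the requested size.
( IFS=,; echo "HUGENODE=${HUGENODE[*]} (total ${_nr_hugepages} pages)" )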
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.603 00:09:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:00.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:00.426 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:00.426 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:00.426 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:00.426 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.426 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.427 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:00.427 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7863504 kB' 'MemAvailable: 10551932 kB' 'Buffers: 2436 kB' 'Cached: 2892664 kB' 'SwapCached: 0 kB' 'Active: 458908 kB' 'Inactive: 2552600 kB' 'Active(anon): 126884 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 117740 kB' 'Mapped: 48200 kB' 'Shmem: 10476 kB' 'KReclaimable: 81484 kB' 'Slab: 166308 kB' 'SReclaimable: 81484 kB' 'SUnreclaim: 84824 kB' 'KernelStack: 6464 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:05:00.427 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... ~45 xtrace entries elided: setup/common.sh@31-@32 IFS=': ' read each /proc/meminfo key (MemTotal through HardwareCorrupted) and "continue" until AnonHugePages matches ...]
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
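The AnonHugePages lookup just returned 0; the trace next repeats the same scan for HugePages_Surp and HugePages_Rsvd. A hedged sketch of the accounting this builds toward, reusing the illustrative get_meminfo_sketch helper from the earlier aside; the asserted relation mirrors the odd_alloc check at hugepages.sh@110, and the inline values are the ones visible in this run's snapshots:

nr_hugepages=512
anon=$(get_meminfo_sketch AnonHugePages)      # 0 kB: THP is not inflating the pool
surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)   # 512 in this run
(( total == nr_hugepages + surp + resv )) && echo "hugepage pool consistent"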
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:00.428 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7863756 kB' 'MemAvailable: 10552184 kB' 'Buffers: 2436 kB' 'Cached: 2892664 kB' 'SwapCached: 0 kB' 'Active: 458400 kB' 'Inactive: 2552600 kB' 'Active(anon): 126376 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 117476 kB' 'Mapped: 48072 kB' 'Shmem: 10476 kB' 'KReclaimable: 81484 kB' 'Slab: 166300 kB' 'SReclaimable: 81484 kB' 'SUnreclaim: 84816 kB' 'KernelStack: 6448 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
[... ~50 xtrace entries elided: setup/common.sh@31-@32 IFS=': ' read each key (MemTotal through HugePages_Rsvd) and "continue" until HugePages_Surp matches ...]
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7863756 kB' 'MemAvailable: 10552184 kB' 'Buffers: 2436 kB' 'Cached: 2892664 kB' 'SwapCached: 0 kB' 'Active: 458524 kB' 'Inactive: 2552600 kB' 'Active(anon): 126500 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 117824 kB' 'Mapped: 48072 kB' 'Shmem: 10476 kB' 'KReclaimable: 81484 kB' 'Slab: 166300 kB' 'SReclaimable: 81484 kB' 'SUnreclaim: 84816 kB' 'KernelStack: 6464 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
[... scan resumes: setup/common.sh@31-@32 read and skip MemTotal through Active; the per-key scan toward HugePages_Rsvd continues below ...]
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc --
setup/common.sh@32 -- # continue 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.691 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.692 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.693 nr_hugepages=512 
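A note on the helper being traced here: get_meminfo (setup/common.sh@16-33) resolves a single "Key: value" entry from a meminfo snapshot. It defaults to /proc/meminfo, switches to the per-node sysfs file when a node id is supplied, strips the "Node N " prefix those files carry, and scans key by key with IFS=': '. The sketch below reconstructs that shape from the traced lines only; reading the file directly with mapfile (instead of through the snapshot printf the real script replays) is an assumption made to keep it self-contained.

  #!/usr/bin/env bash
  shopt -s extglob   # the "Node +([0-9]) " strip pattern below needs extglob

  # Return one value from /proc/meminfo, or from a NUMA node's meminfo
  # when a node id is passed as the second argument.
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _ mem_f mem line
      mem_f=/proc/meminfo
      # common.sh@23-24: prefer the per-node view when it exists
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # @29: drop the "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"   # @31
          # @32-33: under set -x, bash prints the quoted "$get" with every
          # character escaped, which is the \H\u\g\e\P\a\g\e\s... in the log
          [[ $var == "$get" ]] && echo "$val" && return 0
      done
      return 1
  }

  get_meminfo HugePages_Rsvd      # -> 0 in the run above
  get_meminfo HugePages_Surp 0    # node0 view; -> 0 in the run above

Each miss costs three traced steps (IFS, read, continue), which is why a single lookup dominates whole stretches of this log.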
00:05:00.693 resv_hugepages=0
00:05:00.693 surplus_hugepages=0
00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:00.693 anon_hugepages=0
00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... common.sh@17-31 set up as before: get=HugePages_Total, node= (empty, so mem_f stays /proc/meminfo), mapfile -t mem, strip the Node prefix ...]
00:05:00.693 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7863756 kB' 'MemAvailable: 10552184 kB' 'Buffers: 2436 kB' 'Cached: 2892664 kB' 'SwapCached: 0 kB' 'Active: 458448 kB' 'Inactive: 2552600 kB' 'Active(anon): 126424 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 117560 kB' 'Mapped: 48072 kB' 'Shmem: 10476 kB' 'KReclaimable: 81484 kB' 'Slab: 166292 kB' 'SReclaimable: 81484 kB' 'SUnreclaim: 84808 kB' 'KernelStack: 6464 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
[... identical common.sh@31-32 scan steps repeat for every snapshot key from MemTotal through Unaccepted until the requested key matches ...]
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
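The get_nodes step traced just above (setup/hugepages.sh@27-33) discovers NUMA nodes by globbing sysfs and records the expected per-node page count; on this single-node guest it ends with nodes_sys[0]=512 and no_nodes=1. A minimal sketch of that enumeration follows; deriving no_nodes from the array size is an assumption, since the trace only shows the resulting no_nodes=1.

  shopt -s extglob                     # the node+([0-9]) glob needs extglob
  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=512    # @30: array index = trailing node id
  done
  no_nodes=${#nodes_sys[@]}            # would print as no_nodes=1 here
  (( no_nodes > 0 )) || exit 1         # @33: require at least one node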
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7863764 kB' 'MemUsed: 4378216 kB' 'SwapCached: 0 kB' 'Active: 458444 kB' 'Inactive: 2552600 kB' 'Active(anon): 126420 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 2895100 kB' 'Mapped: 48072 kB' 'AnonPages: 117768 kB' 'Shmem: 10476 kB' 'KernelStack: 6464 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81484 kB' 'Slab: 166292 kB' 'SReclaimable: 81484 kB' 'SUnreclaim: 84808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:00.695 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical common.sh@31-32 scan steps repeat for every node0 key from MemTotal through HugePages_Free until the requested key matches ...]
00:05:00.696 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.696 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:00.696 00:09:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:00.696 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:00.696 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:00.696 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:00.696 node0=512 expecting 512
00:05:00.696 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:00.696 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:00.696 00:09:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:00.696
00:05:00.696 real	0m1.010s
00:05:00.696 user	0m0.438s
00:05:00.696 sys	0m0.598s
00:05:00.696 00:09:15 setup.sh.hugepages.custom_alloc --
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.696 ************************************ 00:05:00.696 END TEST custom_alloc 00:05:00.696 ************************************ 00:05:00.696 00:09:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:00.696 00:09:15 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:00.696 00:09:15 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.696 00:09:15 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.696 00:09:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:00.696 ************************************ 00:05:00.696 START TEST no_shrink_alloc 00:05:00.696 ************************************ 00:05:00.696 00:09:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:05:00.696 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:00.696 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:00.696 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:00.696 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:00.696 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:00.696 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.697 00:09:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:01.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.523 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:01.523 0000:00:10.0 (1b36 0010): Already using the 
uio_pci_generic driver 00:05:01.523 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:01.523 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6816188 kB' 'MemAvailable: 9504616 kB' 'Buffers: 2436 kB' 'Cached: 2892664 kB' 'SwapCached: 0 kB' 'Active: 458936 kB' 'Inactive: 2552600 kB' 'Active(anon): 126912 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 117768 kB' 'Mapped: 48192 kB' 'Shmem: 10476 kB' 'KReclaimable: 81484 kB' 'Slab: 166288 kB' 'SReclaimable: 81484 kB' 'SUnreclaim: 84804 kB' 'KernelStack: 6496 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
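The wall of records above is a single call to get_meminfo AnonHugePages: the function snapshots all of /proc/meminfo with mapfile (the long quoted printf record), then walks the snapshot with IFS=': ' and read -r var val _, skipping every key until the requested one matches and its value is echoed back. A minimal standalone sketch of that pattern, reconstructed from the trace rather than copied from SPDK's setup/common.sh:

```bash
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: snapshot
# /proc/meminfo (or a per-node meminfo under sysfs), strip the
# "Node <id> " prefix, then scan key by key until the requested
# field matches. Simplified reconstruction, not the verbatim
# SPDK setup/common.sh implementation.
shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node statistics live under sysfs when a node id is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # node files prefix every line

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Free   # e.g. prints 1024 on this test VM
```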
00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.523 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 
00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
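Every comparison in this scan prints its right-hand side character-escaped (\A\n\o\n\H\u\g\e\P\a\g\e\s). That is ordinary bash xtrace output, consistent with an unquoted variable pattern in [[ == ]]: when the pattern word comes from an expansion, set -x escapes each character to show it is matched literally. A short demo (variable names are illustrative, not from the script):

```bash
#!/usr/bin/env bash
set -x
get=AnonHugePages
var=AnonPages
# Traced as: [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[[ $var == $get ]] || echo "no match, keep scanning"
```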
00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.524 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
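With anon resolved to 0, the trace moves on to HugePages_Surp and will then query HugePages_Rsvd: verify_nr_hugepages needs all three counters before it can assert that the pool kept its configured size. A hedged sketch of that flow, condensing the scan loop into awk (meminfo_val is a hypothetical helper, and the closing arithmetic is an assumption about the check, not SPDK's exact code):

```bash
#!/usr/bin/env bash
# Hypothetical condensed version of the anon/surp/resv bookkeeping the
# trace is performing; the field names are real /proc/meminfo keys.
meminfo_val() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

anon=$(meminfo_val AnonHugePages)    # THP usage; resolved to 0 above
surp=$(meminfo_val HugePages_Surp)   # surplus pages beyond nr_hugepages
resv=$(meminfo_val HugePages_Rsvd)   # reserved but not yet faulted in
free=$(meminfo_val HugePages_Free)
echo "anon=$anon surp=$surp resv=$resv"

# Assumed form of the check: with no surplus and no reservations, all
# 1024 configured pages must still be free, i.e. the pool did not shrink.
expected=1024
if (( surp == 0 && free - resv == expected )); then
    echo "node0=$expected expecting $expected"
fi
```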
00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6816188 kB' 'MemAvailable: 9504616 kB' 'Buffers: 2436 kB' 'Cached: 2892664 kB' 'SwapCached: 0 kB' 'Active: 458484 kB' 'Inactive: 2552600 kB' 'Active(anon): 126460 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 117600 kB' 'Mapped: 48076 kB' 'Shmem: 10476 kB' 'KReclaimable: 81484 kB' 'Slab: 166288 kB' 'SReclaimable: 81484 kB' 'SUnreclaim: 84804 kB' 'KernelStack: 6464 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.525 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.526 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.527 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.527 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.527 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:01.527 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:01.527 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:01.527 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:01.527 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:01.527 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:01.527 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.527 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.527 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.527 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
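The HugePages_Rsvd scan starting here is the last counter the test needs; once it returns, no_shrink_alloc can close out the same way custom_alloc did above (hugepages.sh@126 through @130): fold each node's surplus into nodes_test, bucket the totals, print node0=N expecting N, and string-compare observed against requested. A sketch of that epilogue; the scaffolding around the loop is assumed:

```bash
#!/usr/bin/env bash
# Sketch of the per-node tally and final assertion; index = NUMA node
# id, value = hugepages still free on that node (populated earlier).
nodes_test=(1024)
declare -A sorted_t

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += 0 ))          # this node's surplus was 0
    sorted_t[${nodes_test[node]}]=1      # bucket the distinct totals
    echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
done

# Passes exactly like the "[[ 512 == \5\1\2 ]]" comparison traced for
# custom_alloc, here against the 1024 pages no_shrink_alloc requested.
[[ ${nodes_test[0]} == 1024 ]] && echo "hugepage pool intact"
```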
00:05:01.527 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.527 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6816188 kB' 'MemAvailable: 9504616 kB' 'Buffers: 2436 kB' 'Cached: 2892664 kB' 'SwapCached: 0 kB' 'Active: 458744 kB' 'Inactive: 2552600 kB' 'Active(anon): 126720 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 117860 kB' 'Mapped: 48076 kB' 'Shmem: 10476 kB' 'KReclaimable: 81484 kB' 'Slab: 166288 kB' 'SReclaimable: 81484 kB' 'SUnreclaim: 84804 kB' 'KernelStack: 6464 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.788 00:09:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.788 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.789 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:01.790 nr_hugepages=1024 00:05:01.790 resv_hugepages=0 00:05:01.790 surplus_hugepages=0 00:05:01.790 anon_hugepages=0 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
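[editor's note] With the counters collected, the script echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0 and cross-checks the kernel's hugepage accounting at hugepages.sh@107: (( 1024 == nr_hugepages + surp + resv )). A hedged sketch of that consistency check, using awk for brevity instead of the traced read loop (variable names are taken from the echoes above; this is an illustration, not the script's exact code):

  # Consistency check traced at hugepages.sh@107/@109 (sketch, not verbatim).
  nr_hugepages=1024                                            # requested allocation
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)    # 0 in this run
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)    # 0 in this run
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 in this run
  # The pool is only considered consistent when the kernel-reported total
  # equals the requested pages plus any surplus and reserved pages.
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'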
00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6816188 kB' 'MemAvailable: 9504616 kB' 'Buffers: 2436 kB' 'Cached: 2892664 kB' 'SwapCached: 0 kB' 'Active: 458684 kB' 'Inactive: 2552600 kB' 'Active(anon): 126660 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 117752 kB' 'Mapped: 48076 kB' 'Shmem: 10476 kB' 'KReclaimable: 81484 kB' 'Slab: 166284 kB' 'SReclaimable: 81484 kB' 'SUnreclaim: 84800 kB' 'KernelStack: 6448 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
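[editor's note] The backslash-riddled patterns such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l in these traces are not corruption: when the right-hand side of [[ ... == "..." ]] is quoted, bash's xtrace escapes every character to show the pattern matches literally rather than as a glob. A two-line illustration (hypothetical, only to demonstrate the escaping):

  set -x
  [[ HugePages_Total == "HugePages_Total" ]]   # xtrace prints the RHS as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l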
00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.790 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.791 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6816188 kB' 'MemUsed: 5425792 kB' 'SwapCached: 0 kB' 'Active: 458428 kB' 'Inactive: 2552600 kB' 
'Active(anon): 126404 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2895100 kB' 'Mapped: 48076 kB' 'AnonPages: 117752 kB' 'Shmem: 10476 kB' 'KernelStack: 6448 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81484 kB' 'Slab: 166284 kB' 'SReclaimable: 81484 kB' 'SUnreclaim: 84800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.792 00:09:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.792 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.793 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.793 00:09:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace elided: setup/common.sh@31-32 repeat IFS=': ', read -r var val _, a literal [[ $var == HugePages_Surp ]] test, and continue for each remaining meminfo field from Mapped through HugePages_Free]
00:05:01.794 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.794 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.794 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:01.794 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:01.794 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:01.794 node0=1024 expecting 1024
00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:01.794 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:01.794 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:01.794 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:01.794 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:01.794 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:01.794 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:01.794 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:01.794 00:09:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
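The get_meminfo helper traced above slurps the relevant meminfo table with mapfile, strips any per-node "Node <n> " prefix, then scans key/value pairs and echoes the value of the first key that matches (0 for HugePages_Surp here). A minimal self-contained sketch of the same idiom; the function name and layout are ours, not SPDK's exact setup/common.sh:

    #!/usr/bin/env bash
    shopt -s extglob                          # the +([0-9]) pattern below needs extglob
    # Sketch of the get_meminfo idiom traced above (names are ours, not SPDK's).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _ mem_f=/proc/meminfo mem
        # Per-node counters live in sysfs; their lines carry a "Node <n> " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"             # slurp the whole table
        mem=("${mem[@]#Node +([0-9]) }")      # drop the per-node prefix, if any
        while IFS=': ' read -r var val _; do  # "Key:   value [kB]" -> var, val
            [[ $var == "$get" ]] || continue  # literal match; skip every other field
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1                              # requested key not present
    }
    get_meminfo_sketch HugePages_Surp         # prints 0 on the VM traced here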
00:05:02.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:02.623 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:02.623 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:02.623 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:02.623 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:02.623 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:02.623 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6815032 kB' 'MemAvailable: 9503464 kB' 'Buffers: 2436 kB' 'Cached: 2892668 kB' 'SwapCached: 0 kB' 'Active: 458820 kB' 'Inactive: 2552604 kB' 'Active(anon): 126796 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118152 kB' 'Mapped: 48332 kB' 'Shmem: 10476 kB' 'KReclaimable: 81484 kB' 'Slab: 166280 kB' 'SReclaimable: 81484 kB' 'SUnreclaim: 84796 kB' 'KernelStack: 6456 kB' 'PageTables: 3572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
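One sanity check worth making explicit: the snapshot just printed reports HugePages_Total: 1024 and Hugepagesize: 2048 kB, and their product is exactly the Hugetlb: 2097152 kB (2 GiB) pool, consistent with the INFO line above. A request of NRHUGE=512 would shrink an existing 1024-page pool, so setup.sh keeps the larger allocation, which is evidently the behavior this no_shrink_alloc case checks. The arithmetic, as a quick shell check:

    # pool size = HugePages_Total x Hugepagesize
    echo "$((1024 * 2048)) kB"                # -> 2097152 kB, the Hugetlb field above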
[trace elided: setup/common.sh@31-32 repeat IFS=': ', read -r var val _, a literal [[ $var == AnonHugePages ]] test, and continue for every snapshot field from MemTotal through HardwareCorrupted]
00:05:02.624 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:02.624 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:02.624 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:02.624 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:02.624 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:02.624 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
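The backslash-riddled right-hand sides in these tests (e.g. \A\n\o\n\H\u\g\e\P\a\g\e\s) are not corruption: bash xtrace re-quotes a literal [[ == ]] pattern by escaping every character, so a quoted string compare is rendered exactly this way. A tiny illustration, ours rather than the suite's:

    set -x
    var=AnonHugePages
    [[ $var == "AnonHugePages" ]] && echo literal   # traced as == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
    [[ $var == Anon* ]] && echo glob                # an unquoted * survives as a glob in the trace
    set +x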
00:05:02.624 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:02.624 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:02.624 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:02.624 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.624 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.624 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.624 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.624 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.625 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6814780 kB' 'MemAvailable: 9503212 kB' 'Buffers: 2436 kB' 'Cached: 2892668 kB' 'SwapCached: 0 kB' 'Active: 458692 kB' 'Inactive: 2552604 kB' 'Active(anon): 126668 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 117764 kB' 'Mapped: 48072 kB' 'Shmem: 10476 kB' 'KReclaimable: 81484 kB' 'Slab: 166284 kB' 'SReclaimable: 81484 kB' 'SUnreclaim: 84800 kB' 'KernelStack: 6448 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:05:02.625 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:02.625 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace elided: the same per-field skip loop as above, now testing every field from MemTotal through HugePages_Rsvd against HugePages_Surp]
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
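verify_nr_hugepages gathers three counters per pass: HugePages_Surp (pages allocated above nr_hugepages out of the kernel's overcommit pool), HugePages_Rsvd (pages reserved for a mapping but not yet faulted in), and AnonHugePages (transparent hugepages, which the @96 check above appears to gate on THP not being set to [never]). All three are 0 in the snapshots here. Reusing the hypothetical helper sketched earlier:

    # gather the counters the verifier reads above (all 0 in this run)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    anon=$(get_meminfo_sketch AnonHugePages)
    echo "surp=$surp resv=$resv anon=$anon"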
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:02.626 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:02.627 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6814780 kB' 'MemAvailable: 9503212 kB' 'Buffers: 2436 kB' 'Cached: 2892668 kB' 'SwapCached: 0 kB' 'Active: 458600 kB' 'Inactive: 2552604 kB' 'Active(anon): 126576 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 117936 kB' 'Mapped: 48072 kB' 'Shmem: 10476 kB' 'KReclaimable: 81484 kB' 'Slab: 166284 kB' 'SReclaimable: 81484 kB' 'SUnreclaim: 84800 kB' 'KernelStack: 6448 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
[trace elided: the per-field skip loop runs again, testing each field from MemTotal through VmallocUsed against HugePages_Rsvd; the trace continues past the end of this excerpt]
00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.628 00:09:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.628 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.628 nr_hugepages=1024 00:05:02.628 resv_hugepages=0 00:05:02.628 surplus_hugepages=0 00:05:02.629 anon_hugepages=0 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.629 00:09:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6814780 kB' 'MemAvailable: 9503212 kB' 'Buffers: 2436 kB' 'Cached: 2892668 kB' 'SwapCached: 0 kB' 'Active: 458688 kB' 'Inactive: 2552604 kB' 'Active(anon): 126664 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 117764 kB' 'Mapped: 48072 kB' 'Shmem: 10476 kB' 'KReclaimable: 81484 kB' 'Slab: 166280 kB' 'SReclaimable: 81484 kB' 'SUnreclaim: 84796 kB' 'KernelStack: 6448 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.629 
00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.629 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:02.630 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.631 
00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6814780 kB' 'MemUsed: 5427200 kB' 'SwapCached: 0 kB' 'Active: 458692 kB' 'Inactive: 2552604 kB' 'Active(anon): 126668 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 2895104 kB' 'Mapped: 48072 kB' 'AnonPages: 117764 kB' 'Shmem: 10476 kB' 'KernelStack: 6448 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81484 kB' 'Slab: 166280 kB' 'SReclaimable: 81484 kB' 'SUnreclaim: 84796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
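The scan traced here is the workhorse of setup/common.sh: get_meminfo reads /proc/meminfo by default, switches to /sys/devices/system/node/node<N>/meminfo when a node argument is given (as in this HugePages_Surp lookup for node 0), strips the "Node <N> " prefix that the per-node files carry, and then walks "key: value" pairs with IFS=': ' until the requested field matches, echoing the bare number. The following is a minimal self-contained sketch of that pattern; the function body is illustrative rather than the verbatim SPDK helper.

    #!/usr/bin/env bash
    # Sketch of the meminfo-scan pattern traced above (illustrative).
    shopt -s extglob   # required for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        local -a mem
        # Prefer the per-node sysfs file when a node is requested and present.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines read "Node 0 MemTotal: ...", so drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # ':' and spaces both split: key -> var, number -> val, unit -> _.
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Surp 0   # prints 0 for the node-0 pool in this run

Under this sketch the bookkeeping from the trace holds as well: with nr_hugepages=1024 and surplus and reserved both 0, the invariant (( 1024 == nr_hugepages + surp + resv )) checked at hugepages.sh@110 is satisfied.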
00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.631 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.890 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.891 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.891 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.891 node0=1024 expecting 1024 00:05:02.891 ************************************ 00:05:02.891 END TEST no_shrink_alloc 00:05:02.891 ************************************ 00:05:02.891 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.891 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.891 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.891 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:02.891 00:09:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:02.891 00:05:02.891 real 0m2.003s 00:05:02.891 user 0m0.872s 00:05:02.891 sys 0m1.231s 00:05:02.891 00:09:17 setup.sh.hugepages.no_shrink_alloc -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.891 00:09:17 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:02.891 00:09:17 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:02.891 00:09:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:02.891 00:09:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:02.891 00:09:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.891 00:09:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:02.891 00:09:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.891 00:09:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:02.891 00:09:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:02.891 00:09:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:02.891 00:05:02.891 real 0m8.568s 00:05:02.891 user 0m3.494s 00:05:02.891 sys 0m5.288s 00:05:02.891 ************************************ 00:05:02.891 END TEST hugepages 00:05:02.891 ************************************ 00:05:02.891 00:09:17 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.891 00:09:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:02.891 00:09:17 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:02.891 00:09:17 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.891 00:09:17 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.891 00:09:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:02.891 ************************************ 00:05:02.891 START TEST driver 00:05:02.891 ************************************ 00:05:02.891 00:09:17 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:03.150 * Looking for test storage... 
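With node 0 verified (node0=1024 expecting 1024, surplus and reserved both zero), the suite tears the pools back down: the clear_hp trace above loops over every node's hugepages directory, echoes 0 for each sized pool, and exports CLEAR_HUGE=yes for the scripts that run later. A standalone sketch of that teardown follows; it assumes the standard sysfs layout, root privileges, and that the bare "echo 0" in the trace is redirected into each pool's nr_hugepages file, which xtrace does not show.

    #!/usr/bin/env bash
    # Sketch of the clear_hp teardown traced above (illustrative). Every NUMA
    # node exposes one hugepages-<size>kB directory per supported page size;
    # writing 0 to its nr_hugepages returns that pool to the kernel.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # needs root
        done
    done
    export CLEAR_HUGE=yes   # flag picked up by the SPDK setup scripts afterwards

After this runs, a get_meminfo HugePages_Total lookup like the ones above would report 0 again, leaving a clean slate for the driver tests that start next.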
00:05:03.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:03.150 00:09:17 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:03.150 00:09:17 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:03.150 00:09:17 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.718 00:09:23 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:09.718 00:09:23 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.718 00:09:23 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.718 00:09:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:09.718 ************************************ 00:05:09.718 START TEST guess_driver 00:05:09.718 ************************************ 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:09.718 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:09.718 Looking for driver=uio_pci_generic 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
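The guess_driver trace above captures the whole selection policy: pick_driver tries vfio first, taking it only if the kernel exposes at least one IOMMU group or vfio's unsafe no-IOMMU mode is enabled, and otherwise falls back to uio_pci_generic after confirming via modprobe --show-depends that the module (plus its uio dependency) resolves to real .ko files. (The "local iommu_grups" at driver.sh@21 is a stray spelling in the traced script; the array actually consulted at driver.sh@27 is iommu_groups.) A condensed standalone sketch of that decision follows -- this is an illustrative reimplementation, not the literal setup/driver.sh source:

    #!/usr/bin/env bash
    shopt -s nullglob   # so an empty /sys/kernel/iommu_groups yields an empty array
    pick_driver() {
        local iommu_groups=(/sys/kernel/iommu_groups/*) unsafe_vfio=
        # vfio is only usable with an IOMMU, or with explicit unsafe no-IOMMU mode
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }
    pick_driver

On this VM /sys/kernel/iommu_groups is empty and unsafe_vfio is unset, which is why the trace shows the vfio branch returning 1 and the run settling on uio_pci_generic.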
00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.718 00:09:23 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:09.978 00:09:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:09.978 00:09:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:09.978 00:09:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:10.913 00:09:25 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.476 00:05:17.477 real 0m7.822s 00:05:17.477 user 0m0.929s 00:05:17.477 sys 0m2.065s 00:05:17.477 00:09:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.477 00:09:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:17.477 ************************************ 00:05:17.477 END TEST guess_driver 00:05:17.477 ************************************ 00:05:17.477 00:05:17.477 real 0m14.343s 00:05:17.477 user 0m1.432s 00:05:17.477 sys 0m3.263s 00:05:17.477 00:09:31 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.477 00:09:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:17.477 ************************************ 00:05:17.477 END TEST driver 00:05:17.477 ************************************ 00:05:17.477 00:09:31 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:17.477 00:09:31 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.477 00:09:31 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.477 00:09:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:17.477 ************************************ 00:05:17.477 START TEST devices 00:05:17.477 
************************************ 00:05:17.477 00:09:31 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:17.477 * Looking for test storage... 00:05:17.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:17.477 00:09:31 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:17.477 00:09:31 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:17.477 00:09:31 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.477 00:09:31 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:18.853 
00:09:33 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:18.853 00:09:33 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:18.853 00:09:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:18.853 00:09:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:18.853 No valid GPT data, bailing 00:05:18.853 00:09:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:18.853 00:09:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:18.853 00:09:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:18.853 00:09:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:18.853 00:09:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:18.853 00:09:33 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@201 -- # 
ctrl=nvme1 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:18.853 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:18.853 00:09:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:18.853 00:09:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:18.853 No valid GPT data, bailing 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:19.112 00:09:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:19.112 00:09:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:19.112 00:09:33 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:19.112 No valid GPT data, bailing 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:19.112 00:09:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:19.112 00:09:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:05:19.112 00:09:33 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:19.112 
00:09:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:05:19.112 No valid GPT data, bailing 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:05:19.112 00:09:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:05:19.112 00:09:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:05:19.112 00:09:33 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:05:19.112 No valid GPT data, bailing 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:05:19.112 00:09:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:05:19.112 00:09:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:05:19.112 00:09:33 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:05:19.112 00:09:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:19.112 
00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:05:19.112 00:09:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:19.370 No valid GPT data, bailing 00:05:19.371 00:09:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:19.371 00:09:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:19.371 00:09:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:19.371 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:19.371 00:09:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:19.371 00:09:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:19.371 00:09:33 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:05:19.371 00:09:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:19.371 00:09:33 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:19.371 00:09:33 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:19.371 00:09:33 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:19.371 00:09:33 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.371 00:09:33 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.371 00:09:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:19.371 ************************************ 00:05:19.371 START TEST nvme_mount 00:05:19.371 ************************************ 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:19.371 00:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:20.304 Creating new GPT entries in memory. 00:05:20.304 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:20.304 other utilities. 00:05:20.304 00:09:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:20.304 00:09:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.304 00:09:34 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:20.304 00:09:34 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:20.304 00:09:34 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:21.717 Creating new GPT entries in memory. 00:05:21.717 The operation has completed successfully. 00:05:21.718 00:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:21.718 00:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:21.718 00:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 71298 00:05:21.718 00:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.718 00:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:21.718 00:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.718 00:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:21.718 00:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:21.718 00:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.718 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.977 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.977 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.977 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.977 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.977 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.977 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.543 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.543 00:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.543 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.543 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:22.543 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.543 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:22.543 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:22.801 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:22.801 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.801 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.801 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.801 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:22.801 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:22.801 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.801 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:05:23.060 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.060 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.060 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:23.060 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.060 00:09:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:23.319 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.319 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:23.319 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:23.319 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.319 00:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.319 00:09:37 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.578 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.578 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.578 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.578 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.578 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.578 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.145 00:09:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:24.713 00:09:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.713 00:09:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:24.713 00:09:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:24.713 00:09:39 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.713 00:09:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.713 00:09:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.713 00:09:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.713 00:09:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.972 00:09:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.972 00:09:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.972 00:09:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.972 00:09:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.231 00:09:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.231 00:09:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.489 00:09:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.489 00:09:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:25.489 00:09:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:25.489 00:09:40 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:25.489 00:09:40 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.489 00:09:40 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.489 00:09:40 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:25.489 00:09:40 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:25.489 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:25.489 00:05:25.489 real 0m6.224s 00:05:25.489 user 0m1.587s 00:05:25.489 sys 0m2.319s 00:05:25.489 00:09:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.489 00:09:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:25.489 ************************************ 00:05:25.490 END TEST nvme_mount 00:05:25.490 ************************************ 00:05:25.490 00:09:40 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:25.490 00:09:40 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.490 00:09:40 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.490 00:09:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:25.490 ************************************ 00:05:25.490 START TEST dm_mount 00:05:25.490 ************************************ 00:05:25.490 00:09:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:25.490 00:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:25.490 00:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:25.490 00:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:25.490 00:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:25.490 00:09:40 setup.sh.devices.dm_mount -- 
setup/common.sh@39 -- # local disk=nvme0n1 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:25.748 00:09:40 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:26.683 Creating new GPT entries in memory. 00:05:26.683 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:26.683 other utilities. 00:05:26.683 00:09:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:26.683 00:09:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:26.683 00:09:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:26.683 00:09:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:26.683 00:09:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:27.618 Creating new GPT entries in memory. 00:05:27.618 The operation has completed successfully. 00:05:27.618 00:09:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:27.618 00:09:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:27.618 00:09:42 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:27.618 00:09:42 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:27.618 00:09:42 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:28.994 The operation has completed successfully. 
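partition_drive above reduces the nominal 1 GiB size to a 262144-sector span ((( size /= 4096 ))) and lays the two partitions down back to back, serializing each sgdisk call with flock on the disk node while sync_dev_uevents.sh waits for the matching partition uevents. Reconstructed from the trace, the equivalent bare commands are:

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all               # destroy any existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:264191     # partition 1: sectors 2048-264191 (262144 sectors)
    sgdisk "$disk" --new=2:264192:526335   # partition 2: the next 262144 sectors

Each --new=<number>:<start>:<end> range is inclusive, so both partitions come out at exactly 262144 sectors, i.e. 128 MiB at 512-byte sectors.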
00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 71935 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.994 00:09:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:29.253 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:29.253 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:29.253 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:29.253 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.253 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:29.253 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.253 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:29.253 00:09:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.511 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:29.511 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.511 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:29.511 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.770 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:29.770 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.029 00:09:44 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.597 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.597 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:30.597 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:30.597 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.597 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.597 00:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.597 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.597 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.856 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.856 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.856 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.856 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.114 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:31.114 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.374 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.374 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:31.374 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:31.374 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:31.374 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.374 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:31.374 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:31.374 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.374 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
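dm_mount repeats the mkfs/mount/verify cycle of nvme_mount, but against a device-mapper node (/dev/mapper/nvme_dm_test, resolved above to dm-0) built on the two partitions; cleanup_dm, traced above, then removes that node and wipes both backing partitions (its wipefs output follows). A minimal sketch of the create/teardown pair -- the table below assumes a single linear target over the first partition, since the actual table handed to dmsetup is not visible in this trace:

    # create a dm node over nvme0n1p1 (262144 sectors), then exercise it
    dmsetup create nvme_dm_test --table '0 262144 linear /dev/nvme0n1p1 0'
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
    # teardown, mirroring cleanup_dm
    umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
    dmsetup remove --force nvme_dm_test
    wipefs --all /dev/nvme0n1p1 /dev/nvme0n1p2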
00:05:31.374 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:31.374 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:31.374 00:09:45 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:05:31.374
00:05:31.374 real 0m5.790s
00:05:31.374 user 0m1.103s
00:05:31.374 sys 0m1.591s
00:05:31.374 00:09:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:31.374 00:09:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:05:31.374 ************************************
00:05:31.374 END TEST dm_mount
00:05:31.374 ************************************
00:05:31.374 00:09:46 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:05:31.374 00:09:46 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:05:31.374 00:09:46 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:31.374 00:09:46 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:31.374 00:09:46 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:31.374 00:09:46 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:31.374 00:09:46 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:31.633 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:05:31.633 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:05:31.633 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:31.633 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:31.633 00:09:46 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:05:31.633 00:09:46 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:05:31.633 00:09:46 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:31.633 00:09:46 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:31.633 00:09:46 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:31.633 00:09:46 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:31.633 00:09:46 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:05:31.892
00:05:31.892 real 0m14.464s
00:05:31.892 user 0m3.688s
00:05:31.892 sys 0m5.066s
00:05:31.892 00:09:46 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:31.892 00:09:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:31.892 ************************************
00:05:31.892 END TEST devices
00:05:31.892 ************************************
00:05:31.892 ************************************
00:05:31.892 END TEST setup.sh
00:05:31.892 ************************************
00:05:31.892
00:05:31.892 real 0m52.089s
00:05:31.892 user 0m12.373s
00:05:31.892 sys 0m19.747s
00:05:31.892 00:09:46 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:31.892 00:09:46 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:31.892 00:09:46 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:32.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:33.090 Hugepages
00:05:33.090 node hugesize free / total
00:05:33.090 node0 1048576kB 0 / 0
00:05:33.090 node0 2048kB 2048 / 2048
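The Hugepages block in the status output above is assembled from the kernel's per-node sysfs counters rather than /proc/meminfo. A hypothetical reader that reproduces those two rows (the paths are standard kernel sysfs; the loop is illustrative, not the setup.sh source):

    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*kB; do
            # e.g. node0 2048kB 2048 / 2048
            printf '%s %s %s / %s\n' "${node##*/}" "${hp##*hugepages-}" \
                "$(<"$hp"/free_hugepages)" "$(<"$hp"/nr_hugepages)"
        done
    done

These are the same hugepages-*kB directories that clear_hp walked earlier, echoing 0 into each node's counters before the suite exited.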
00:05:33.090 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:33.090 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:33.347 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:33.347 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:33.605 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:33.605 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:33.605 00:09:48 -- spdk/autotest.sh@130 -- # uname -s 00:05:33.605 00:09:48 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:33.605 00:09:48 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:33.605 00:09:48 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:34.541 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.109 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.109 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.109 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.109 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.367 00:09:49 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:36.304 00:09:50 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:36.304 00:09:50 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:36.304 00:09:50 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:36.304 00:09:50 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:36.304 00:09:50 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:36.304 00:09:50 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:36.304 00:09:50 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.304 00:09:50 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:36.304 00:09:50 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:36.304 00:09:50 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:05:36.304 00:09:50 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:36.304 00:09:50 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:36.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.150 Waiting for block devices as requested 00:05:37.409 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.409 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.409 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.668 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:42.940 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:42.940 00:09:57 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:42.940 00:09:57 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:42.940 00:09:57 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.940 00:09:57 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:05:42.940 00:09:57 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:42.940 00:09:57 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:42.940 00:09:57 -- common/autotest_common.sh@1503 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:42.940 00:09:57 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:05:42.941 00:09:57 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:05:42.941 00:09:57 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:42.941 00:09:57 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:42.941 00:09:57 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:42.941 00:09:57 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:42.941 00:09:57 -- common/autotest_common.sh@1553 -- # continue 00:05:42.941 00:09:57 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:42.941 00:09:57 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:42.941 00:09:57 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.941 00:09:57 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:05:42.941 00:09:57 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:42.941 00:09:57 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:42.941 00:09:57 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:42.941 00:09:57 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:42.941 00:09:57 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:42.941 00:09:57 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:42.941 00:09:57 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:42.941 00:09:57 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:42.941 00:09:57 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:42.941 00:09:57 -- common/autotest_common.sh@1553 -- # continue 00:05:42.941 00:09:57 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:42.941 00:09:57 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:42.941 00:09:57 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.941 00:09:57 -- common/autotest_common.sh@1498 -- # 
grep 0000:00:12.0/nvme/nvme 00:05:42.941 00:09:57 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:42.941 00:09:57 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:42.941 00:09:57 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:42.941 00:09:57 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme2 00:05:42.941 00:09:57 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme2 00:05:42.941 00:09:57 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme2 ]] 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme2 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:42.941 00:09:57 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:42.941 00:09:57 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme2 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:42.941 00:09:57 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:42.941 00:09:57 -- common/autotest_common.sh@1553 -- # continue 00:05:42.941 00:09:57 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:42.941 00:09:57 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:42.941 00:09:57 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.941 00:09:57 -- common/autotest_common.sh@1498 -- # grep 0000:00:13.0/nvme/nvme 00:05:42.941 00:09:57 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:42.941 00:09:57 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:42.941 00:09:57 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:42.941 00:09:57 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme3 00:05:42.941 00:09:57 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme3 00:05:42.941 00:09:57 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme3 ]] 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme3 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:42.941 00:09:57 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:42.941 00:09:57 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:42.941 00:09:57 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme3 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:42.941 00:09:57 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:42.941 00:09:57 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:42.941 00:09:57 -- common/autotest_common.sh@1553 -- # continue 00:05:42.941 00:09:57 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:05:42.941 00:09:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.941 00:09:57 -- common/autotest_common.sh@10 -- # set +x 00:05:42.941 00:09:57 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:42.941 00:09:57 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:42.941 00:09:57 -- common/autotest_common.sh@10 -- # set +x 00:05:42.941 00:09:57 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:43.510 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.448 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.448 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.448 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.448 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.448 00:09:59 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:44.448 00:09:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.448 00:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.448 00:09:59 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:44.448 00:09:59 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:44.448 00:09:59 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:44.448 00:09:59 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:44.448 00:09:59 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:44.707 00:09:59 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:44.707 00:09:59 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:44.707 00:09:59 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:44.707 00:09:59 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:44.707 00:09:59 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:44.707 00:09:59 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:44.707 00:09:59 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:05:44.707 00:09:59 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:44.707 00:09:59 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:44.707 00:09:59 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:44.707 00:09:59 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:44.707 00:09:59 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:44.707 00:09:59 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:44.707 00:09:59 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:44.707 00:09:59 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:44.707 00:09:59 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:44.707 00:09:59 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:44.707 00:09:59 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:44.707 00:09:59 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:44.707 00:09:59 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:44.707 00:09:59 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:44.707 00:09:59 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:44.707 00:09:59 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:44.707 
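The per-BDF pre-cleanup loop above resolves each PCI address to its kernel controller node through sysfs, then parses `nvme id-ctrl` for OACS (does the controller support namespace management?) and UNVMCAP (is there unallocated capacity to revert?). A minimal standalone sketch of that logic, assuming nvme-cli is installed and using 0000:00:10.0 as a placeholder BDF:

```bash
#!/usr/bin/env bash
# Map a PCI BDF to its NVMe controller node and read the same id-ctrl
# fields the harness checks above. The BDF is a placeholder.
bdf="0000:00:10.0"

# Each controller appears under its PCI device in sysfs, so resolving
# /sys/class/nvme/* and filtering on the BDF yields the matching node
# (e.g. /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 -> nvme1).
path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || exit 1
ctrlr="/dev/$(basename "$path")"

# OACS bit 3 (mask 0x8) advertises namespace management support.
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
if (( (oacs & 0x8) != 0 )); then
  # UNVMCAP of 0 means there is no unallocated capacity to revert,
  # which is the condition behind each "continue" in the loop above.
  unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
  echo "$ctrlr: namespace management supported, unvmcap=$unvmcap"
fi
```

In this run every controller reports oacs=0x12a and unvmcap=0, so each iteration hits the "nothing to revert" branch and continues.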
00:09:59 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:44.707 00:09:59 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:05:44.707 00:09:59 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:05:44.707 00:09:59 -- common/autotest_common.sh@1589 -- # return 0 00:05:44.707 00:09:59 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:44.707 00:09:59 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:44.707 00:09:59 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:44.707 00:09:59 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:44.707 00:09:59 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:44.707 00:09:59 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:44.707 00:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.707 00:09:59 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:44.707 00:09:59 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:44.707 00:09:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.707 00:09:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.707 00:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.707 ************************************ 00:05:44.707 START TEST env 00:05:44.707 ************************************ 00:05:44.707 00:09:59 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:44.966 * Looking for test storage... 00:05:44.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:44.966 00:09:59 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:44.966 00:09:59 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.966 00:09:59 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.966 00:09:59 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.966 ************************************ 00:05:44.966 START TEST env_memory 00:05:44.966 ************************************ 00:05:44.966 00:09:59 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:44.966 00:05:44.966 00:05:44.966 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.966 http://cunit.sourceforge.net/ 00:05:44.966 00:05:44.966 00:05:44.966 Suite: memory 00:05:44.966 Test: alloc and free memory map ...[2024-07-23 00:09:59.511964] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:44.966 passed 00:05:44.966 Test: mem map translation ...[2024-07-23 00:09:59.553226] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:44.966 [2024-07-23 00:09:59.553418] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:44.966 [2024-07-23 00:09:59.553603] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:44.966 [2024-07-23 00:09:59.553684] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:44.966 passed 00:05:44.966 Test: mem map registration ...[2024-07-23 00:09:59.615880] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register 
parameters, vaddr=0x200000 len=1234 00:05:44.966 [2024-07-23 00:09:59.616047] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:44.966 passed 00:05:45.225 Test: mem map adjacent registrations ...passed 00:05:45.225 00:05:45.225 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.225 suites 1 1 n/a 0 0 00:05:45.225 tests 4 4 4 0 0 00:05:45.225 asserts 152 152 152 0 n/a 00:05:45.225 00:05:45.225 Elapsed time = 0.224 seconds 00:05:45.225 00:05:45.225 real 0m0.280s 00:05:45.225 user 0m0.238s 00:05:45.225 sys 0m0.030s 00:05:45.225 00:09:59 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.225 00:09:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:45.225 ************************************ 00:05:45.225 END TEST env_memory 00:05:45.225 ************************************ 00:05:45.225 00:09:59 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:45.225 00:09:59 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.225 00:09:59 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.225 00:09:59 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.225 ************************************ 00:05:45.225 START TEST env_vtophys 00:05:45.225 ************************************ 00:05:45.225 00:09:59 env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:45.225 EAL: lib.eal log level changed from notice to debug 00:05:45.225 EAL: Detected lcore 0 as core 0 on socket 0 00:05:45.225 EAL: Detected lcore 1 as core 0 on socket 0 00:05:45.225 EAL: Detected lcore 2 as core 0 on socket 0 00:05:45.225 EAL: Detected lcore 3 as core 0 on socket 0 00:05:45.226 EAL: Detected lcore 4 as core 0 on socket 0 00:05:45.226 EAL: Detected lcore 5 as core 0 on socket 0 00:05:45.226 EAL: Detected lcore 6 as core 0 on socket 0 00:05:45.226 EAL: Detected lcore 7 as core 0 on socket 0 00:05:45.226 EAL: Detected lcore 8 as core 0 on socket 0 00:05:45.226 EAL: Detected lcore 9 as core 0 on socket 0 00:05:45.226 EAL: Maximum logical cores by configuration: 128 00:05:45.226 EAL: Detected CPU lcores: 10 00:05:45.226 EAL: Detected NUMA nodes: 1 00:05:45.226 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:45.226 EAL: Detected shared linkage of DPDK 00:05:45.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:45.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:45.226 EAL: Registered [vdev] bus. 
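The env_memory run that just completed, and the env_vtophys run beginning here, are standalone CUnit binaries; the *ERROR* lines from spdk_mem_map_set_translation and spdk_mem_register above are deliberate invalid-parameter probes, not failures (the run summary reports 0 failed). Since run_test only adds timing and xtrace wrapping, the same binaries can be invoked directly from a build tree:

```bash
# Run the env suite binaries directly; paths match the repo layout in
# this log. Each prints its own CUnit run summary.
cd /home/vagrant/spdk_repo/spdk
test/env/memory/memory_ut    # mem map alloc/translation/registration tests
test/env/vtophys/vtophys     # heap grow/shrink through EAL mem event callbacks
```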
00:05:45.226 EAL: bus.vdev log level changed from disabled to notice 00:05:45.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:45.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:45.226 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:45.226 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:45.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:45.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:45.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:45.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:45.226 EAL: No shared files mode enabled, IPC will be disabled 00:05:45.226 EAL: No shared files mode enabled, IPC is disabled 00:05:45.226 EAL: Selected IOVA mode 'PA' 00:05:45.226 EAL: Probing VFIO support... 00:05:45.226 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:45.226 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:45.226 EAL: Ask a virtual area of 0x2e000 bytes 00:05:45.226 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:45.226 EAL: Setting up physically contiguous memory... 00:05:45.226 EAL: Setting maximum number of open files to 524288 00:05:45.226 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:45.226 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:45.226 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.226 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:45.226 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.226 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.226 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:45.226 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:45.226 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.226 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:45.226 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.226 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.226 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:45.226 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:45.226 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.226 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:45.226 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.226 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.226 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:45.226 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:45.226 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.226 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:45.226 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.226 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.226 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:45.226 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:45.226 EAL: Hugepages will be freed exactly as allocated. 
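Before any heap operations, the EAL preamble above reserves four memseg lists, each a 0x61000-byte header plus 0x400000000 bytes of virtual address space, to be backed on demand by 2 MiB hugepages from the pool shown in the earlier status output (node0: 2048 free / 2048 total). A quick way to inspect that pool outside the harness; the HUGEMEM value below is illustrative, not what this job configured:

```bash
# System-wide 2 MiB hugepage pool that backs the memseg lists above:
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo

# Per-node count, matching the "node0 2048kB 2048 / 2048" status line:
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

# setup.sh provisions the pool; HUGEMEM is in MB (4096 is only an
# illustrative value here, not this job's configuration).
sudo HUGEMEM=4096 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
```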
00:05:45.226 EAL: No shared files mode enabled, IPC is disabled 00:05:45.226 EAL: No shared files mode enabled, IPC is disabled 00:05:45.485 EAL: TSC frequency is ~2490000 KHz 00:05:45.485 EAL: Main lcore 0 is ready (tid=7ff6c7056a40;cpuset=[0]) 00:05:45.485 EAL: Trying to obtain current memory policy. 00:05:45.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.485 EAL: Restoring previous memory policy: 0 00:05:45.485 EAL: request: mp_malloc_sync 00:05:45.485 EAL: No shared files mode enabled, IPC is disabled 00:05:45.485 EAL: Heap on socket 0 was expanded by 2MB 00:05:45.485 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:45.485 EAL: No shared files mode enabled, IPC is disabled 00:05:45.485 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:45.485 EAL: Mem event callback 'spdk:(nil)' registered 00:05:45.485 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:45.485 00:05:45.485 00:05:45.485 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.485 http://cunit.sourceforge.net/ 00:05:45.485 00:05:45.485 00:05:45.485 Suite: components_suite 00:05:45.743 Test: vtophys_malloc_test ...passed 00:05:45.743 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:45.743 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.743 EAL: Restoring previous memory policy: 4 00:05:45.743 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.743 EAL: request: mp_malloc_sync 00:05:45.743 EAL: No shared files mode enabled, IPC is disabled 00:05:45.743 EAL: Heap on socket 0 was expanded by 4MB 00:05:45.743 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.743 EAL: request: mp_malloc_sync 00:05:45.743 EAL: No shared files mode enabled, IPC is disabled 00:05:45.743 EAL: Heap on socket 0 was shrunk by 4MB 00:05:45.743 EAL: Trying to obtain current memory policy. 00:05:45.743 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.743 EAL: Restoring previous memory policy: 4 00:05:45.743 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.743 EAL: request: mp_malloc_sync 00:05:45.743 EAL: No shared files mode enabled, IPC is disabled 00:05:45.743 EAL: Heap on socket 0 was expanded by 6MB 00:05:45.743 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.743 EAL: request: mp_malloc_sync 00:05:45.743 EAL: No shared files mode enabled, IPC is disabled 00:05:45.743 EAL: Heap on socket 0 was shrunk by 6MB 00:05:45.743 EAL: Trying to obtain current memory policy. 00:05:45.743 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.743 EAL: Restoring previous memory policy: 4 00:05:45.743 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.743 EAL: request: mp_malloc_sync 00:05:45.743 EAL: No shared files mode enabled, IPC is disabled 00:05:45.743 EAL: Heap on socket 0 was expanded by 10MB 00:05:45.743 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.743 EAL: request: mp_malloc_sync 00:05:45.743 EAL: No shared files mode enabled, IPC is disabled 00:05:45.743 EAL: Heap on socket 0 was shrunk by 10MB 00:05:45.743 EAL: Trying to obtain current memory policy. 
00:05:45.743 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.743 EAL: Restoring previous memory policy: 4 00:05:45.744 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.744 EAL: request: mp_malloc_sync 00:05:45.744 EAL: No shared files mode enabled, IPC is disabled 00:05:45.744 EAL: Heap on socket 0 was expanded by 18MB 00:05:45.744 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.744 EAL: request: mp_malloc_sync 00:05:45.744 EAL: No shared files mode enabled, IPC is disabled 00:05:45.744 EAL: Heap on socket 0 was shrunk by 18MB 00:05:45.744 EAL: Trying to obtain current memory policy. 00:05:45.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.744 EAL: Restoring previous memory policy: 4 00:05:45.744 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.744 EAL: request: mp_malloc_sync 00:05:45.744 EAL: No shared files mode enabled, IPC is disabled 00:05:45.744 EAL: Heap on socket 0 was expanded by 34MB 00:05:45.744 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.744 EAL: request: mp_malloc_sync 00:05:45.744 EAL: No shared files mode enabled, IPC is disabled 00:05:45.744 EAL: Heap on socket 0 was shrunk by 34MB 00:05:45.744 EAL: Trying to obtain current memory policy. 00:05:45.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.003 EAL: Restoring previous memory policy: 4 00:05:46.003 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.003 EAL: request: mp_malloc_sync 00:05:46.003 EAL: No shared files mode enabled, IPC is disabled 00:05:46.003 EAL: Heap on socket 0 was expanded by 66MB 00:05:46.003 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.003 EAL: request: mp_malloc_sync 00:05:46.003 EAL: No shared files mode enabled, IPC is disabled 00:05:46.003 EAL: Heap on socket 0 was shrunk by 66MB 00:05:46.003 EAL: Trying to obtain current memory policy. 00:05:46.003 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.003 EAL: Restoring previous memory policy: 4 00:05:46.003 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.003 EAL: request: mp_malloc_sync 00:05:46.003 EAL: No shared files mode enabled, IPC is disabled 00:05:46.003 EAL: Heap on socket 0 was expanded by 130MB 00:05:46.003 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.003 EAL: request: mp_malloc_sync 00:05:46.003 EAL: No shared files mode enabled, IPC is disabled 00:05:46.003 EAL: Heap on socket 0 was shrunk by 130MB 00:05:46.003 EAL: Trying to obtain current memory policy. 00:05:46.004 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.004 EAL: Restoring previous memory policy: 4 00:05:46.004 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.004 EAL: request: mp_malloc_sync 00:05:46.004 EAL: No shared files mode enabled, IPC is disabled 00:05:46.004 EAL: Heap on socket 0 was expanded by 258MB 00:05:46.004 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.004 EAL: request: mp_malloc_sync 00:05:46.004 EAL: No shared files mode enabled, IPC is disabled 00:05:46.004 EAL: Heap on socket 0 was shrunk by 258MB 00:05:46.004 EAL: Trying to obtain current memory policy. 
00:05:46.004 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.263 EAL: Restoring previous memory policy: 4 00:05:46.263 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.263 EAL: request: mp_malloc_sync 00:05:46.263 EAL: No shared files mode enabled, IPC is disabled 00:05:46.263 EAL: Heap on socket 0 was expanded by 514MB 00:05:46.263 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.263 EAL: request: mp_malloc_sync 00:05:46.263 EAL: No shared files mode enabled, IPC is disabled 00:05:46.263 EAL: Heap on socket 0 was shrunk by 514MB 00:05:46.263 EAL: Trying to obtain current memory policy. 00:05:46.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.523 EAL: Restoring previous memory policy: 4 00:05:46.523 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.523 EAL: request: mp_malloc_sync 00:05:46.523 EAL: No shared files mode enabled, IPC is disabled 00:05:46.523 EAL: Heap on socket 0 was expanded by 1026MB 00:05:46.782 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.041 passed 00:05:47.041 00:05:47.041 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.041 suites 1 1 n/a 0 0 00:05:47.041 tests 2 2 2 0 0 00:05:47.041 asserts 5386 5386 5386 0 n/a 00:05:47.041 00:05:47.041 Elapsed time = 1.455 seconds 00:05:47.041 EAL: request: mp_malloc_sync 00:05:47.041 EAL: No shared files mode enabled, IPC is disabled 00:05:47.041 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:47.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.041 EAL: request: mp_malloc_sync 00:05:47.041 EAL: No shared files mode enabled, IPC is disabled 00:05:47.041 EAL: Heap on socket 0 was shrunk by 2MB 00:05:47.041 EAL: No shared files mode enabled, IPC is disabled 00:05:47.041 EAL: No shared files mode enabled, IPC is disabled 00:05:47.041 EAL: No shared files mode enabled, IPC is disabled 00:05:47.041 00:05:47.041 real 0m1.709s 00:05:47.041 user 0m0.851s 00:05:47.041 sys 0m0.724s 00:05:47.041 00:10:01 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.041 00:10:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:47.041 ************************************ 00:05:47.041 END TEST env_vtophys 00:05:47.041 ************************************ 00:05:47.041 00:10:01 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:47.041 00:10:01 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.041 00:10:01 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.041 00:10:01 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.041 ************************************ 00:05:47.041 START TEST env_pci 00:05:47.041 ************************************ 00:05:47.041 00:10:01 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:47.041 00:05:47.041 00:05:47.041 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.041 http://cunit.sourceforge.net/ 00:05:47.041 00:05:47.041 00:05:47.041 Suite: pci 00:05:47.041 Test: pci_hook ...[2024-07-23 00:10:01.602371] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 73727 has claimed it 00:05:47.041 passed 00:05:47.041 00:05:47.041 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.041 suites 1 1 n/a 0 0 00:05:47.041 tests 1 1 1 0 0 00:05:47.041 asserts 25 25 25 0 n/a 00:05:47.041 00:05:47.041 Elapsed time = 0.010 seconds 00:05:47.041 EAL: Cannot find 
device (10000:00:01.0) 00:05:47.041 EAL: Failed to attach device on primary process 00:05:47.041 00:05:47.041 real 0m0.103s 00:05:47.041 user 0m0.036s 00:05:47.041 sys 0m0.065s 00:05:47.041 00:10:01 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.041 ************************************ 00:05:47.041 END TEST env_pci 00:05:47.041 ************************************ 00:05:47.041 00:10:01 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:47.041 00:10:01 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:47.300 00:10:01 env -- env/env.sh@15 -- # uname 00:05:47.300 00:10:01 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:47.300 00:10:01 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:47.300 00:10:01 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:47.300 00:10:01 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:47.300 00:10:01 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.300 00:10:01 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.300 ************************************ 00:05:47.300 START TEST env_dpdk_post_init 00:05:47.300 ************************************ 00:05:47.300 00:10:01 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:47.300 EAL: Detected CPU lcores: 10 00:05:47.300 EAL: Detected NUMA nodes: 1 00:05:47.300 EAL: Detected shared linkage of DPDK 00:05:47.300 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:47.300 EAL: Selected IOVA mode 'PA' 00:05:47.300 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:47.300 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:47.300 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:47.300 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:47.300 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:47.300 Starting DPDK initialization... 00:05:47.300 Starting SPDK post initialization... 00:05:47.300 SPDK NVMe probe 00:05:47.300 Attaching to 0000:00:10.0 00:05:47.300 Attaching to 0000:00:11.0 00:05:47.300 Attaching to 0000:00:12.0 00:05:47.300 Attaching to 0000:00:13.0 00:05:47.300 Attached to 0000:00:10.0 00:05:47.300 Attached to 0000:00:11.0 00:05:47.300 Attached to 0000:00:13.0 00:05:47.300 Attached to 0000:00:12.0 00:05:47.300 Cleaning up... 
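env_dpdk_post_init can attach all four controllers through the spdk_nvme driver only because setup.sh previously rebound them from the kernel nvme driver to uio_pci_generic (the "nvme -> uio_pci_generic" lines earlier). A small sketch for checking and reverting that binding by hand, using the BDFs enumerated in this run:

```bash
# Print the driver currently bound to each controller; after setup.sh
# this should report uio_pci_generic for all four BDFs.
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
  link="/sys/bus/pci/devices/$bdf/driver"
  if [[ -e "$link" ]]; then
    echo "$bdf -> $(basename "$(readlink -f "$link")")"
  else
    echo "$bdf -> unbound"
  fi
done

# Hand the devices back to the kernel nvme driver, as the harness does
# in the "uio_pci_generic -> nvme" lines above:
sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
```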
00:05:47.560 00:05:47.560 real 0m0.247s 00:05:47.560 user 0m0.079s 00:05:47.560 sys 0m0.071s 00:05:47.560 00:10:01 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.560 00:10:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.560 ************************************ 00:05:47.560 END TEST env_dpdk_post_init 00:05:47.560 ************************************ 00:05:47.560 00:10:02 env -- env/env.sh@26 -- # uname 00:05:47.560 00:10:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:47.560 00:10:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.560 00:10:02 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.560 00:10:02 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.560 00:10:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.560 ************************************ 00:05:47.560 START TEST env_mem_callbacks 00:05:47.560 ************************************ 00:05:47.560 00:10:02 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.560 EAL: Detected CPU lcores: 10 00:05:47.560 EAL: Detected NUMA nodes: 1 00:05:47.560 EAL: Detected shared linkage of DPDK 00:05:47.560 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:47.560 EAL: Selected IOVA mode 'PA' 00:05:47.560 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:47.560 00:05:47.560 00:05:47.560 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.560 http://cunit.sourceforge.net/ 00:05:47.560 00:05:47.560 00:05:47.560 Suite: memory 00:05:47.560 Test: test ... 00:05:47.560 register 0x200000200000 2097152 00:05:47.560 malloc 3145728 00:05:47.560 register 0x200000400000 4194304 00:05:47.560 buf 0x200000500000 len 3145728 PASSED 00:05:47.560 malloc 64 00:05:47.560 buf 0x2000004fff40 len 64 PASSED 00:05:47.560 malloc 4194304 00:05:47.560 register 0x200000800000 6291456 00:05:47.560 buf 0x200000a00000 len 4194304 PASSED 00:05:47.560 free 0x200000500000 3145728 00:05:47.560 free 0x2000004fff40 64 00:05:47.560 unregister 0x200000400000 4194304 PASSED 00:05:47.560 free 0x200000a00000 4194304 00:05:47.560 unregister 0x200000800000 6291456 PASSED 00:05:47.560 malloc 8388608 00:05:47.560 register 0x200000400000 10485760 00:05:47.560 buf 0x200000600000 len 8388608 PASSED 00:05:47.560 free 0x200000600000 8388608 00:05:47.560 unregister 0x200000400000 10485760 PASSED 00:05:47.560 passed 00:05:47.560 00:05:47.560 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.560 suites 1 1 n/a 0 0 00:05:47.560 tests 1 1 1 0 0 00:05:47.560 asserts 15 15 15 0 n/a 00:05:47.560 00:05:47.560 Elapsed time = 0.011 seconds 00:05:47.820 00:05:47.820 real 0m0.192s 00:05:47.820 user 0m0.036s 00:05:47.820 sys 0m0.054s 00:05:47.820 00:10:02 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.820 ************************************ 00:05:47.820 END TEST env_mem_callbacks 00:05:47.820 ************************************ 00:05:47.820 00:10:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:47.820 00:05:47.820 real 0m3.012s 00:05:47.820 user 0m1.402s 00:05:47.820 sys 0m1.269s 00:05:47.820 00:10:02 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.820 ************************************ 00:05:47.820 END TEST env 00:05:47.820 00:10:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.820 
************************************ 00:05:47.820 00:10:02 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:47.820 00:10:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.820 00:10:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.820 00:10:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.820 ************************************ 00:05:47.820 START TEST rpc 00:05:47.820 ************************************ 00:05:47.820 00:10:02 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:47.820 * Looking for test storage... 00:05:48.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:48.079 00:10:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=73846 00:05:48.079 00:10:02 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:48.079 00:10:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.079 00:10:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 73846 00:05:48.079 00:10:02 rpc -- common/autotest_common.sh@827 -- # '[' -z 73846 ']' 00:05:48.079 00:10:02 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.079 00:10:02 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:48.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.079 00:10:02 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.079 00:10:02 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:48.079 00:10:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.079 [2024-07-23 00:10:02.610777] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:48.079 [2024-07-23 00:10:02.610926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73846 ] 00:05:48.079 [2024-07-23 00:10:02.755183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.339 [2024-07-23 00:10:02.797550] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:48.339 [2024-07-23 00:10:02.797620] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 73846' to capture a snapshot of events at runtime. 00:05:48.339 [2024-07-23 00:10:02.797634] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:48.339 [2024-07-23 00:10:02.797648] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:48.339 [2024-07-23 00:10:02.797663] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid73846 for offline analysis/debug. 
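waitforlisten above blocks until the freshly started spdk_tgt (launched with -e bdev, which enables the bdev tracepoint group named in the app_setup_trace notices) accepts RPCs on the default /var/tmp/spdk.sock socket. The rpc_integrity test below drives it through rpc_cmd; the same sequence can be replayed by hand with scripts/rpc.py, using only RPC names that appear in this log:

```bash
cd /home/vagrant/spdk_repo/spdk
./build/bin/spdk_tgt -e bdev &   # -e bdev: capture the bdev tracepoint group
sleep 1                          # crude stand-in for the harness's waitforlisten

scripts/rpc.py bdev_malloc_create 8 512                # 8 MiB, 512 B blocks -> Malloc0
scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
scripts/rpc.py bdev_get_bdevs | jq length              # 2, as the test asserts
scripts/rpc.py bdev_passthru_delete Passthru0
scripts/rpc.py bdev_malloc_delete Malloc0
scripts/rpc.py bdev_get_bdevs | jq length              # back to 0
kill %1
```

The trace shared-memory file named above (/dev/shm/spdk_tgt_trace.pid73846) can afterwards be decoded offline with the spdk_trace tool the log mentions.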
00:05:48.339 [2024-07-23 00:10:02.797705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.905 00:10:03 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:48.905 00:10:03 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:48.905 00:10:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:48.905 00:10:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:48.905 00:10:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:48.905 00:10:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:48.905 00:10:03 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.905 00:10:03 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.905 00:10:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.905 ************************************ 00:05:48.905 START TEST rpc_integrity 00:05:48.905 ************************************ 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:48.905 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.905 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:48.905 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:48.905 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.905 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.905 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:48.905 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.905 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.905 { 00:05:48.905 "name": "Malloc0", 00:05:48.905 "aliases": [ 00:05:48.905 "ce18c9b8-2cd4-438e-b080-1876716c1af2" 00:05:48.905 ], 00:05:48.905 "product_name": "Malloc disk", 00:05:48.905 "block_size": 512, 00:05:48.905 "num_blocks": 16384, 00:05:48.905 "uuid": "ce18c9b8-2cd4-438e-b080-1876716c1af2", 00:05:48.905 "assigned_rate_limits": { 00:05:48.905 "rw_ios_per_sec": 0, 00:05:48.905 "rw_mbytes_per_sec": 0, 00:05:48.905 "r_mbytes_per_sec": 0, 00:05:48.905 "w_mbytes_per_sec": 0 00:05:48.905 }, 00:05:48.905 "claimed": false, 00:05:48.905 "zoned": false, 00:05:48.905 "supported_io_types": { 00:05:48.905 "read": true, 00:05:48.905 "write": true, 00:05:48.905 "unmap": true, 00:05:48.905 "write_zeroes": 
true, 00:05:48.905 "flush": true, 00:05:48.905 "reset": true, 00:05:48.905 "compare": false, 00:05:48.905 "compare_and_write": false, 00:05:48.905 "abort": true, 00:05:48.905 "nvme_admin": false, 00:05:48.905 "nvme_io": false 00:05:48.905 }, 00:05:48.905 "memory_domains": [ 00:05:48.905 { 00:05:48.905 "dma_device_id": "system", 00:05:48.905 "dma_device_type": 1 00:05:48.905 }, 00:05:48.905 { 00:05:48.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.905 "dma_device_type": 2 00:05:48.905 } 00:05:48.905 ], 00:05:48.905 "driver_specific": {} 00:05:48.905 } 00:05:48.905 ]' 00:05:48.905 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:48.905 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.905 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.905 [2024-07-23 00:10:03.535340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:48.905 [2024-07-23 00:10:03.535407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.905 [2024-07-23 00:10:03.535429] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:48.905 [2024-07-23 00:10:03.535446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.905 [2024-07-23 00:10:03.537882] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.905 [2024-07-23 00:10:03.537923] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.905 Passthru0 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.905 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.905 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.905 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.905 { 00:05:48.905 "name": "Malloc0", 00:05:48.905 "aliases": [ 00:05:48.905 "ce18c9b8-2cd4-438e-b080-1876716c1af2" 00:05:48.905 ], 00:05:48.905 "product_name": "Malloc disk", 00:05:48.905 "block_size": 512, 00:05:48.905 "num_blocks": 16384, 00:05:48.905 "uuid": "ce18c9b8-2cd4-438e-b080-1876716c1af2", 00:05:48.905 "assigned_rate_limits": { 00:05:48.905 "rw_ios_per_sec": 0, 00:05:48.905 "rw_mbytes_per_sec": 0, 00:05:48.905 "r_mbytes_per_sec": 0, 00:05:48.905 "w_mbytes_per_sec": 0 00:05:48.905 }, 00:05:48.905 "claimed": true, 00:05:48.905 "claim_type": "exclusive_write", 00:05:48.905 "zoned": false, 00:05:48.905 "supported_io_types": { 00:05:48.905 "read": true, 00:05:48.905 "write": true, 00:05:48.905 "unmap": true, 00:05:48.905 "write_zeroes": true, 00:05:48.905 "flush": true, 00:05:48.905 "reset": true, 00:05:48.905 "compare": false, 00:05:48.905 "compare_and_write": false, 00:05:48.905 "abort": true, 00:05:48.905 "nvme_admin": false, 00:05:48.905 "nvme_io": false 00:05:48.905 }, 00:05:48.905 "memory_domains": [ 00:05:48.905 { 00:05:48.905 "dma_device_id": "system", 00:05:48.905 "dma_device_type": 1 00:05:48.905 }, 00:05:48.905 { 00:05:48.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.905 "dma_device_type": 2 00:05:48.905 } 
00:05:48.905 ], 00:05:48.905 "driver_specific": {} 00:05:48.905 }, 00:05:48.905 { 00:05:48.905 "name": "Passthru0", 00:05:48.905 "aliases": [ 00:05:48.905 "a12227aa-3e3e-5c47-a1ca-d83217a276e4" 00:05:48.905 ], 00:05:48.905 "product_name": "passthru", 00:05:48.905 "block_size": 512, 00:05:48.905 "num_blocks": 16384, 00:05:48.905 "uuid": "a12227aa-3e3e-5c47-a1ca-d83217a276e4", 00:05:48.905 "assigned_rate_limits": { 00:05:48.905 "rw_ios_per_sec": 0, 00:05:48.905 "rw_mbytes_per_sec": 0, 00:05:48.905 "r_mbytes_per_sec": 0, 00:05:48.905 "w_mbytes_per_sec": 0 00:05:48.905 }, 00:05:48.905 "claimed": false, 00:05:48.905 "zoned": false, 00:05:48.905 "supported_io_types": { 00:05:48.905 "read": true, 00:05:48.905 "write": true, 00:05:48.905 "unmap": true, 00:05:48.905 "write_zeroes": true, 00:05:48.905 "flush": true, 00:05:48.905 "reset": true, 00:05:48.905 "compare": false, 00:05:48.905 "compare_and_write": false, 00:05:48.905 "abort": true, 00:05:48.905 "nvme_admin": false, 00:05:48.905 "nvme_io": false 00:05:48.905 }, 00:05:48.905 "memory_domains": [ 00:05:48.905 { 00:05:48.905 "dma_device_id": "system", 00:05:48.905 "dma_device_type": 1 00:05:48.905 }, 00:05:48.905 { 00:05:48.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.905 "dma_device_type": 2 00:05:48.905 } 00:05:48.905 ], 00:05:48.905 "driver_specific": { 00:05:48.905 "passthru": { 00:05:48.905 "name": "Passthru0", 00:05:48.905 "base_bdev_name": "Malloc0" 00:05:48.905 } 00:05:48.905 } 00:05:48.905 } 00:05:48.905 ]' 00:05:48.905 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:49.164 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.164 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.164 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.164 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.164 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.164 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:49.164 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.164 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.164 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.164 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.164 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.164 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.164 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.164 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.164 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:49.164 00:10:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.164 00:05:49.164 real 0m0.294s 00:05:49.164 user 0m0.184s 00:05:49.164 sys 0m0.044s 00:05:49.164 ************************************ 00:05:49.164 END TEST rpc_integrity 00:05:49.164 ************************************ 00:05:49.164 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.164 00:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.164 00:10:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:49.164 00:10:03 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.164 00:10:03 rpc -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.164 00:10:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.164 ************************************ 00:05:49.164 START TEST rpc_plugins 00:05:49.164 ************************************ 00:05:49.164 00:10:03 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:49.164 00:10:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:49.164 00:10:03 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.164 00:10:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.164 00:10:03 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.164 00:10:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:49.164 00:10:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:49.164 00:10:03 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.164 00:10:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.164 00:10:03 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.164 00:10:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:49.165 { 00:05:49.165 "name": "Malloc1", 00:05:49.165 "aliases": [ 00:05:49.165 "8d15703f-78af-4402-bbeb-e35609532804" 00:05:49.165 ], 00:05:49.165 "product_name": "Malloc disk", 00:05:49.165 "block_size": 4096, 00:05:49.165 "num_blocks": 256, 00:05:49.165 "uuid": "8d15703f-78af-4402-bbeb-e35609532804", 00:05:49.165 "assigned_rate_limits": { 00:05:49.165 "rw_ios_per_sec": 0, 00:05:49.165 "rw_mbytes_per_sec": 0, 00:05:49.165 "r_mbytes_per_sec": 0, 00:05:49.165 "w_mbytes_per_sec": 0 00:05:49.165 }, 00:05:49.165 "claimed": false, 00:05:49.165 "zoned": false, 00:05:49.165 "supported_io_types": { 00:05:49.165 "read": true, 00:05:49.165 "write": true, 00:05:49.165 "unmap": true, 00:05:49.165 "write_zeroes": true, 00:05:49.165 "flush": true, 00:05:49.165 "reset": true, 00:05:49.165 "compare": false, 00:05:49.165 "compare_and_write": false, 00:05:49.165 "abort": true, 00:05:49.165 "nvme_admin": false, 00:05:49.165 "nvme_io": false 00:05:49.165 }, 00:05:49.165 "memory_domains": [ 00:05:49.165 { 00:05:49.165 "dma_device_id": "system", 00:05:49.165 "dma_device_type": 1 00:05:49.165 }, 00:05:49.165 { 00:05:49.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.165 "dma_device_type": 2 00:05:49.165 } 00:05:49.165 ], 00:05:49.165 "driver_specific": {} 00:05:49.165 } 00:05:49.165 ]' 00:05:49.165 00:10:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:49.165 00:10:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:49.165 00:10:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:49.165 00:10:03 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.165 00:10:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.423 00:10:03 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.423 00:10:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:49.423 00:10:03 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.423 00:10:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.423 00:10:03 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.423 00:10:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:49.423 00:10:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:49.423 00:10:03 rpc.rpc_plugins -- rpc/rpc.sh@36 
-- # '[' 0 == 0 ']' 00:05:49.423 00:05:49.423 real 0m0.143s 00:05:49.423 user 0m0.084s 00:05:49.423 sys 0m0.025s 00:05:49.423 00:10:03 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.423 00:10:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.423 ************************************ 00:05:49.423 END TEST rpc_plugins 00:05:49.423 ************************************ 00:05:49.423 00:10:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:49.423 00:10:03 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.423 00:10:03 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.423 00:10:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.423 ************************************ 00:05:49.423 START TEST rpc_trace_cmd_test 00:05:49.423 ************************************ 00:05:49.423 00:10:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:49.423 00:10:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:49.423 00:10:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:49.423 00:10:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.423 00:10:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.423 00:10:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.423 00:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:49.423 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid73846", 00:05:49.423 "tpoint_group_mask": "0x8", 00:05:49.423 "iscsi_conn": { 00:05:49.423 "mask": "0x2", 00:05:49.424 "tpoint_mask": "0x0" 00:05:49.424 }, 00:05:49.424 "scsi": { 00:05:49.424 "mask": "0x4", 00:05:49.424 "tpoint_mask": "0x0" 00:05:49.424 }, 00:05:49.424 "bdev": { 00:05:49.424 "mask": "0x8", 00:05:49.424 "tpoint_mask": "0xffffffffffffffff" 00:05:49.424 }, 00:05:49.424 "nvmf_rdma": { 00:05:49.424 "mask": "0x10", 00:05:49.424 "tpoint_mask": "0x0" 00:05:49.424 }, 00:05:49.424 "nvmf_tcp": { 00:05:49.424 "mask": "0x20", 00:05:49.424 "tpoint_mask": "0x0" 00:05:49.424 }, 00:05:49.424 "ftl": { 00:05:49.424 "mask": "0x40", 00:05:49.424 "tpoint_mask": "0x0" 00:05:49.424 }, 00:05:49.424 "blobfs": { 00:05:49.424 "mask": "0x80", 00:05:49.424 "tpoint_mask": "0x0" 00:05:49.424 }, 00:05:49.424 "dsa": { 00:05:49.424 "mask": "0x200", 00:05:49.424 "tpoint_mask": "0x0" 00:05:49.424 }, 00:05:49.424 "thread": { 00:05:49.424 "mask": "0x400", 00:05:49.424 "tpoint_mask": "0x0" 00:05:49.424 }, 00:05:49.424 "nvme_pcie": { 00:05:49.424 "mask": "0x800", 00:05:49.424 "tpoint_mask": "0x0" 00:05:49.424 }, 00:05:49.424 "iaa": { 00:05:49.424 "mask": "0x1000", 00:05:49.424 "tpoint_mask": "0x0" 00:05:49.424 }, 00:05:49.424 "nvme_tcp": { 00:05:49.424 "mask": "0x2000", 00:05:49.424 "tpoint_mask": "0x0" 00:05:49.424 }, 00:05:49.424 "bdev_nvme": { 00:05:49.424 "mask": "0x4000", 00:05:49.424 "tpoint_mask": "0x0" 00:05:49.424 }, 00:05:49.424 "sock": { 00:05:49.424 "mask": "0x8000", 00:05:49.424 "tpoint_mask": "0x0" 00:05:49.424 } 00:05:49.424 }' 00:05:49.424 00:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:49.424 00:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:49.424 00:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:49.424 00:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:49.424 00:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:05:49.683 00:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:49.683 00:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:49.683 00:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:49.683 00:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:49.683 00:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:49.683 00:05:49.683 real 0m0.233s 00:05:49.683 user 0m0.188s 00:05:49.683 sys 0m0.037s 00:05:49.683 00:10:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.683 00:10:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.683 ************************************ 00:05:49.683 END TEST rpc_trace_cmd_test 00:05:49.683 ************************************ 00:05:49.683 00:10:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:49.683 00:10:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:49.683 00:10:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:49.683 00:10:04 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.683 00:10:04 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.683 00:10:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.683 ************************************ 00:05:49.683 START TEST rpc_daemon_integrity 00:05:49.683 ************************************ 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.683 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.943 { 00:05:49.943 "name": "Malloc2", 00:05:49.943 "aliases": [ 00:05:49.943 "bbe94fb1-3452-4e3f-9388-266d06c435fb" 00:05:49.943 ], 00:05:49.943 "product_name": "Malloc disk", 00:05:49.943 "block_size": 512, 00:05:49.943 "num_blocks": 16384, 00:05:49.943 "uuid": "bbe94fb1-3452-4e3f-9388-266d06c435fb", 00:05:49.943 "assigned_rate_limits": { 00:05:49.943 "rw_ios_per_sec": 0, 00:05:49.943 
"rw_mbytes_per_sec": 0, 00:05:49.943 "r_mbytes_per_sec": 0, 00:05:49.943 "w_mbytes_per_sec": 0 00:05:49.943 }, 00:05:49.943 "claimed": false, 00:05:49.943 "zoned": false, 00:05:49.943 "supported_io_types": { 00:05:49.943 "read": true, 00:05:49.943 "write": true, 00:05:49.943 "unmap": true, 00:05:49.943 "write_zeroes": true, 00:05:49.943 "flush": true, 00:05:49.943 "reset": true, 00:05:49.943 "compare": false, 00:05:49.943 "compare_and_write": false, 00:05:49.943 "abort": true, 00:05:49.943 "nvme_admin": false, 00:05:49.943 "nvme_io": false 00:05:49.943 }, 00:05:49.943 "memory_domains": [ 00:05:49.943 { 00:05:49.943 "dma_device_id": "system", 00:05:49.943 "dma_device_type": 1 00:05:49.943 }, 00:05:49.943 { 00:05:49.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.943 "dma_device_type": 2 00:05:49.943 } 00:05:49.943 ], 00:05:49.943 "driver_specific": {} 00:05:49.943 } 00:05:49.943 ]' 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.943 [2024-07-23 00:10:04.422949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:49.943 [2024-07-23 00:10:04.423018] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.943 [2024-07-23 00:10:04.423041] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:49.943 [2024-07-23 00:10:04.423056] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.943 [2024-07-23 00:10:04.425638] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.943 [2024-07-23 00:10:04.425682] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:49.943 Passthru0 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.943 { 00:05:49.943 "name": "Malloc2", 00:05:49.943 "aliases": [ 00:05:49.943 "bbe94fb1-3452-4e3f-9388-266d06c435fb" 00:05:49.943 ], 00:05:49.943 "product_name": "Malloc disk", 00:05:49.943 "block_size": 512, 00:05:49.943 "num_blocks": 16384, 00:05:49.943 "uuid": "bbe94fb1-3452-4e3f-9388-266d06c435fb", 00:05:49.943 "assigned_rate_limits": { 00:05:49.943 "rw_ios_per_sec": 0, 00:05:49.943 "rw_mbytes_per_sec": 0, 00:05:49.943 "r_mbytes_per_sec": 0, 00:05:49.943 "w_mbytes_per_sec": 0 00:05:49.943 }, 00:05:49.943 "claimed": true, 00:05:49.943 "claim_type": "exclusive_write", 00:05:49.943 "zoned": false, 00:05:49.943 "supported_io_types": { 00:05:49.943 "read": true, 00:05:49.943 "write": true, 00:05:49.943 "unmap": true, 00:05:49.943 "write_zeroes": true, 00:05:49.943 "flush": true, 00:05:49.943 "reset": true, 00:05:49.943 "compare": false, 
00:05:49.943 "compare_and_write": false, 00:05:49.943 "abort": true, 00:05:49.943 "nvme_admin": false, 00:05:49.943 "nvme_io": false 00:05:49.943 }, 00:05:49.943 "memory_domains": [ 00:05:49.943 { 00:05:49.943 "dma_device_id": "system", 00:05:49.943 "dma_device_type": 1 00:05:49.943 }, 00:05:49.943 { 00:05:49.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.943 "dma_device_type": 2 00:05:49.943 } 00:05:49.943 ], 00:05:49.943 "driver_specific": {} 00:05:49.943 }, 00:05:49.943 { 00:05:49.943 "name": "Passthru0", 00:05:49.943 "aliases": [ 00:05:49.943 "a4d07f72-b43b-52b9-bb42-eabd2789cb8d" 00:05:49.943 ], 00:05:49.943 "product_name": "passthru", 00:05:49.943 "block_size": 512, 00:05:49.943 "num_blocks": 16384, 00:05:49.943 "uuid": "a4d07f72-b43b-52b9-bb42-eabd2789cb8d", 00:05:49.943 "assigned_rate_limits": { 00:05:49.943 "rw_ios_per_sec": 0, 00:05:49.943 "rw_mbytes_per_sec": 0, 00:05:49.943 "r_mbytes_per_sec": 0, 00:05:49.943 "w_mbytes_per_sec": 0 00:05:49.943 }, 00:05:49.943 "claimed": false, 00:05:49.943 "zoned": false, 00:05:49.943 "supported_io_types": { 00:05:49.943 "read": true, 00:05:49.943 "write": true, 00:05:49.943 "unmap": true, 00:05:49.943 "write_zeroes": true, 00:05:49.943 "flush": true, 00:05:49.943 "reset": true, 00:05:49.943 "compare": false, 00:05:49.943 "compare_and_write": false, 00:05:49.943 "abort": true, 00:05:49.943 "nvme_admin": false, 00:05:49.943 "nvme_io": false 00:05:49.943 }, 00:05:49.943 "memory_domains": [ 00:05:49.943 { 00:05:49.943 "dma_device_id": "system", 00:05:49.943 "dma_device_type": 1 00:05:49.943 }, 00:05:49.943 { 00:05:49.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.943 "dma_device_type": 2 00:05:49.943 } 00:05:49.943 ], 00:05:49.943 "driver_specific": { 00:05:49.943 "passthru": { 00:05:49.943 "name": "Passthru0", 00:05:49.943 "base_bdev_name": "Malloc2" 00:05:49.943 } 00:05:49.943 } 00:05:49.943 } 00:05:49.943 ]' 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.943 00:05:49.943 real 0m0.308s 00:05:49.943 user 0m0.190s 
00:05:49.943 sys 0m0.054s 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.943 00:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.943 ************************************ 00:05:49.943 END TEST rpc_daemon_integrity 00:05:49.943 ************************************ 00:05:50.203 00:10:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:50.203 00:10:04 rpc -- rpc/rpc.sh@84 -- # killprocess 73846 00:05:50.203 00:10:04 rpc -- common/autotest_common.sh@946 -- # '[' -z 73846 ']' 00:05:50.203 00:10:04 rpc -- common/autotest_common.sh@950 -- # kill -0 73846 00:05:50.203 00:10:04 rpc -- common/autotest_common.sh@951 -- # uname 00:05:50.203 00:10:04 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:50.203 00:10:04 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73846 00:05:50.203 00:10:04 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:50.203 killing process with pid 73846 00:05:50.203 00:10:04 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:50.203 00:10:04 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73846' 00:05:50.203 00:10:04 rpc -- common/autotest_common.sh@965 -- # kill 73846 00:05:50.203 00:10:04 rpc -- common/autotest_common.sh@970 -- # wait 73846 00:05:50.462 00:05:50.462 real 0m2.681s 00:05:50.462 user 0m3.208s 00:05:50.462 sys 0m0.831s 00:05:50.462 00:10:05 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.462 00:10:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.462 ************************************ 00:05:50.462 END TEST rpc 00:05:50.462 ************************************ 00:05:50.462 00:10:05 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:50.462 00:10:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.462 00:10:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.462 00:10:05 -- common/autotest_common.sh@10 -- # set +x 00:05:50.462 ************************************ 00:05:50.462 START TEST skip_rpc 00:05:50.462 ************************************ 00:05:50.462 00:10:05 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:50.721 * Looking for test storage... 
00:05:50.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:50.721 00:10:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:50.721 00:10:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:50.721 00:10:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:50.721 00:10:05 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.721 00:10:05 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.721 00:10:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.721 ************************************ 00:05:50.721 START TEST skip_rpc 00:05:50.721 ************************************ 00:05:50.721 00:10:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:50.722 00:10:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=74045 00:05:50.722 00:10:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:50.722 00:10:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.722 00:10:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:50.722 [2024-07-23 00:10:05.384494] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:50.722 [2024-07-23 00:10:05.384624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74045 ] 00:05:50.981 [2024-07-23 00:10:05.536385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.981 [2024-07-23 00:10:05.578152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 74045 00:05:56.256 00:10:10 
skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 74045 ']' 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 74045 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74045 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:56.256 killing process with pid 74045 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74045' 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 74045 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 74045 00:05:56.256 00:05:56.256 real 0m5.444s 00:05:56.256 user 0m5.036s 00:05:56.256 sys 0m0.326s 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.256 00:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.256 ************************************ 00:05:56.256 END TEST skip_rpc 00:05:56.257 ************************************ 00:05:56.257 00:10:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:56.257 00:10:10 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.257 00:10:10 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.257 00:10:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.257 ************************************ 00:05:56.257 START TEST skip_rpc_with_json 00:05:56.257 ************************************ 00:05:56.257 00:10:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:56.257 00:10:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:56.257 00:10:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=74127 00:05:56.257 00:10:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.257 00:10:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.257 00:10:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 74127 00:05:56.257 00:10:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 74127 ']' 00:05:56.257 00:10:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.257 00:10:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:56.257 00:10:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.257 00:10:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:56.257 00:10:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.257 [2024-07-23 00:10:10.909651] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
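The skip_rpc pass traced above reduces to a single assertion: with --no-rpc-server nothing ever listens on /var/tmp/spdk.sock, so every RPC must fail. A minimal standalone sketch of that check, assuming a stock SPDK checkout with build/bin/spdk_tgt and scripts/rpc.py (the rpc_cmd/NOT helpers in the trace are harness wrappers; plain rpc.py stands in for them here):

    # sketch of the skip_rpc check traced above, assuming an SPDK checkout
    # with build/bin/spdk_tgt and scripts/rpc.py (paths as logged)
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5
    # with --no-rpc-server nothing listens on /var/tmp/spdk.sock,
    # so the RPC below has to fail -- the test asserts exactly that (es=1)
    if scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "FAIL: RPC succeeded despite --no-rpc-server" >&2
    fi
    kill "$spdk_pid"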
00:05:56.257 [2024-07-23 00:10:10.910720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74127 ] 00:05:56.561 [2024-07-23 00:10:11.084394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.561 [2024-07-23 00:10:11.127220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.129 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:57.129 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:57.129 00:10:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:57.129 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.129 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.129 [2024-07-23 00:10:11.693714] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:57.129 request: 00:05:57.129 { 00:05:57.129 "trtype": "tcp", 00:05:57.129 "method": "nvmf_get_transports", 00:05:57.129 "req_id": 1 00:05:57.129 } 00:05:57.129 Got JSON-RPC error response 00:05:57.129 response: 00:05:57.129 { 00:05:57.129 "code": -19, 00:05:57.129 "message": "No such device" 00:05:57.129 } 00:05:57.129 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:57.129 00:10:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:57.129 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.129 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.129 [2024-07-23 00:10:11.705813] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:57.129 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.129 00:10:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:57.129 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.129 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.388 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.388 00:10:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:57.388 { 00:05:57.388 "subsystems": [ 00:05:57.388 { 00:05:57.388 "subsystem": "keyring", 00:05:57.388 "config": [] 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "subsystem": "iobuf", 00:05:57.388 "config": [ 00:05:57.388 { 00:05:57.388 "method": "iobuf_set_options", 00:05:57.388 "params": { 00:05:57.388 "small_pool_count": 8192, 00:05:57.388 "large_pool_count": 1024, 00:05:57.388 "small_bufsize": 8192, 00:05:57.388 "large_bufsize": 135168 00:05:57.388 } 00:05:57.388 } 00:05:57.388 ] 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "subsystem": "sock", 00:05:57.388 "config": [ 00:05:57.388 { 00:05:57.388 "method": "sock_set_default_impl", 00:05:57.388 "params": { 00:05:57.388 "impl_name": "posix" 00:05:57.388 } 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "method": "sock_impl_set_options", 00:05:57.388 "params": { 00:05:57.388 "impl_name": "ssl", 00:05:57.388 "recv_buf_size": 4096, 00:05:57.388 "send_buf_size": 4096, 00:05:57.388 
"enable_recv_pipe": true, 00:05:57.388 "enable_quickack": false, 00:05:57.388 "enable_placement_id": 0, 00:05:57.388 "enable_zerocopy_send_server": true, 00:05:57.388 "enable_zerocopy_send_client": false, 00:05:57.388 "zerocopy_threshold": 0, 00:05:57.388 "tls_version": 0, 00:05:57.388 "enable_ktls": false 00:05:57.388 } 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "method": "sock_impl_set_options", 00:05:57.388 "params": { 00:05:57.388 "impl_name": "posix", 00:05:57.388 "recv_buf_size": 2097152, 00:05:57.388 "send_buf_size": 2097152, 00:05:57.388 "enable_recv_pipe": true, 00:05:57.388 "enable_quickack": false, 00:05:57.388 "enable_placement_id": 0, 00:05:57.388 "enable_zerocopy_send_server": true, 00:05:57.388 "enable_zerocopy_send_client": false, 00:05:57.388 "zerocopy_threshold": 0, 00:05:57.388 "tls_version": 0, 00:05:57.388 "enable_ktls": false 00:05:57.388 } 00:05:57.388 } 00:05:57.388 ] 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "subsystem": "vmd", 00:05:57.388 "config": [] 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "subsystem": "accel", 00:05:57.388 "config": [ 00:05:57.388 { 00:05:57.388 "method": "accel_set_options", 00:05:57.388 "params": { 00:05:57.388 "small_cache_size": 128, 00:05:57.388 "large_cache_size": 16, 00:05:57.388 "task_count": 2048, 00:05:57.388 "sequence_count": 2048, 00:05:57.388 "buf_count": 2048 00:05:57.388 } 00:05:57.388 } 00:05:57.388 ] 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "subsystem": "bdev", 00:05:57.388 "config": [ 00:05:57.388 { 00:05:57.388 "method": "bdev_set_options", 00:05:57.388 "params": { 00:05:57.388 "bdev_io_pool_size": 65535, 00:05:57.388 "bdev_io_cache_size": 256, 00:05:57.388 "bdev_auto_examine": true, 00:05:57.388 "iobuf_small_cache_size": 128, 00:05:57.388 "iobuf_large_cache_size": 16 00:05:57.388 } 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "method": "bdev_raid_set_options", 00:05:57.388 "params": { 00:05:57.388 "process_window_size_kb": 1024 00:05:57.388 } 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "method": "bdev_iscsi_set_options", 00:05:57.388 "params": { 00:05:57.388 "timeout_sec": 30 00:05:57.388 } 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "method": "bdev_nvme_set_options", 00:05:57.388 "params": { 00:05:57.388 "action_on_timeout": "none", 00:05:57.388 "timeout_us": 0, 00:05:57.388 "timeout_admin_us": 0, 00:05:57.388 "keep_alive_timeout_ms": 10000, 00:05:57.388 "arbitration_burst": 0, 00:05:57.388 "low_priority_weight": 0, 00:05:57.388 "medium_priority_weight": 0, 00:05:57.388 "high_priority_weight": 0, 00:05:57.388 "nvme_adminq_poll_period_us": 10000, 00:05:57.388 "nvme_ioq_poll_period_us": 0, 00:05:57.388 "io_queue_requests": 0, 00:05:57.388 "delay_cmd_submit": true, 00:05:57.388 "transport_retry_count": 4, 00:05:57.388 "bdev_retry_count": 3, 00:05:57.388 "transport_ack_timeout": 0, 00:05:57.388 "ctrlr_loss_timeout_sec": 0, 00:05:57.388 "reconnect_delay_sec": 0, 00:05:57.388 "fast_io_fail_timeout_sec": 0, 00:05:57.388 "disable_auto_failback": false, 00:05:57.388 "generate_uuids": false, 00:05:57.388 "transport_tos": 0, 00:05:57.388 "nvme_error_stat": false, 00:05:57.388 "rdma_srq_size": 0, 00:05:57.388 "io_path_stat": false, 00:05:57.388 "allow_accel_sequence": false, 00:05:57.388 "rdma_max_cq_size": 0, 00:05:57.388 "rdma_cm_event_timeout_ms": 0, 00:05:57.388 "dhchap_digests": [ 00:05:57.388 "sha256", 00:05:57.388 "sha384", 00:05:57.388 "sha512" 00:05:57.388 ], 00:05:57.388 "dhchap_dhgroups": [ 00:05:57.388 "null", 00:05:57.388 "ffdhe2048", 00:05:57.388 "ffdhe3072", 00:05:57.388 "ffdhe4096", 00:05:57.388 "ffdhe6144", 
00:05:57.388 "ffdhe8192" 00:05:57.388 ] 00:05:57.388 } 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "method": "bdev_nvme_set_hotplug", 00:05:57.388 "params": { 00:05:57.388 "period_us": 100000, 00:05:57.388 "enable": false 00:05:57.388 } 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "method": "bdev_wait_for_examine" 00:05:57.388 } 00:05:57.388 ] 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "subsystem": "scsi", 00:05:57.388 "config": null 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "subsystem": "scheduler", 00:05:57.388 "config": [ 00:05:57.388 { 00:05:57.388 "method": "framework_set_scheduler", 00:05:57.388 "params": { 00:05:57.388 "name": "static" 00:05:57.388 } 00:05:57.388 } 00:05:57.388 ] 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "subsystem": "vhost_scsi", 00:05:57.388 "config": [] 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "subsystem": "vhost_blk", 00:05:57.388 "config": [] 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "subsystem": "ublk", 00:05:57.388 "config": [] 00:05:57.388 }, 00:05:57.388 { 00:05:57.388 "subsystem": "nbd", 00:05:57.388 "config": [] 00:05:57.388 }, 00:05:57.389 { 00:05:57.389 "subsystem": "nvmf", 00:05:57.389 "config": [ 00:05:57.389 { 00:05:57.389 "method": "nvmf_set_config", 00:05:57.389 "params": { 00:05:57.389 "discovery_filter": "match_any", 00:05:57.389 "admin_cmd_passthru": { 00:05:57.389 "identify_ctrlr": false 00:05:57.389 } 00:05:57.389 } 00:05:57.389 }, 00:05:57.389 { 00:05:57.389 "method": "nvmf_set_max_subsystems", 00:05:57.389 "params": { 00:05:57.389 "max_subsystems": 1024 00:05:57.389 } 00:05:57.389 }, 00:05:57.389 { 00:05:57.389 "method": "nvmf_set_crdt", 00:05:57.389 "params": { 00:05:57.389 "crdt1": 0, 00:05:57.389 "crdt2": 0, 00:05:57.389 "crdt3": 0 00:05:57.389 } 00:05:57.389 }, 00:05:57.389 { 00:05:57.389 "method": "nvmf_create_transport", 00:05:57.389 "params": { 00:05:57.389 "trtype": "TCP", 00:05:57.389 "max_queue_depth": 128, 00:05:57.389 "max_io_qpairs_per_ctrlr": 127, 00:05:57.389 "in_capsule_data_size": 4096, 00:05:57.389 "max_io_size": 131072, 00:05:57.389 "io_unit_size": 131072, 00:05:57.389 "max_aq_depth": 128, 00:05:57.389 "num_shared_buffers": 511, 00:05:57.389 "buf_cache_size": 4294967295, 00:05:57.389 "dif_insert_or_strip": false, 00:05:57.389 "zcopy": false, 00:05:57.389 "c2h_success": true, 00:05:57.389 "sock_priority": 0, 00:05:57.389 "abort_timeout_sec": 1, 00:05:57.389 "ack_timeout": 0, 00:05:57.389 "data_wr_pool_size": 0 00:05:57.389 } 00:05:57.389 } 00:05:57.389 ] 00:05:57.389 }, 00:05:57.389 { 00:05:57.389 "subsystem": "iscsi", 00:05:57.389 "config": [ 00:05:57.389 { 00:05:57.389 "method": "iscsi_set_options", 00:05:57.389 "params": { 00:05:57.389 "node_base": "iqn.2016-06.io.spdk", 00:05:57.389 "max_sessions": 128, 00:05:57.389 "max_connections_per_session": 2, 00:05:57.389 "max_queue_depth": 64, 00:05:57.389 "default_time2wait": 2, 00:05:57.389 "default_time2retain": 20, 00:05:57.389 "first_burst_length": 8192, 00:05:57.389 "immediate_data": true, 00:05:57.389 "allow_duplicated_isid": false, 00:05:57.389 "error_recovery_level": 0, 00:05:57.389 "nop_timeout": 60, 00:05:57.389 "nop_in_interval": 30, 00:05:57.389 "disable_chap": false, 00:05:57.389 "require_chap": false, 00:05:57.389 "mutual_chap": false, 00:05:57.389 "chap_group": 0, 00:05:57.389 "max_large_datain_per_connection": 64, 00:05:57.389 "max_r2t_per_connection": 4, 00:05:57.389 "pdu_pool_size": 36864, 00:05:57.389 "immediate_data_pool_size": 16384, 00:05:57.389 "data_out_pool_size": 2048 00:05:57.389 } 00:05:57.389 } 00:05:57.389 ] 00:05:57.389 } 00:05:57.389 ] 
00:05:57.389 } 00:05:57.389 00:10:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:57.389 00:10:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 74127 00:05:57.389 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 74127 ']' 00:05:57.389 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 74127 00:05:57.389 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:57.389 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:57.389 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74127 00:05:57.389 killing process with pid 74127 00:05:57.389 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:57.389 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:57.389 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74127' 00:05:57.389 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 74127 00:05:57.389 00:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 74127 00:05:57.648 00:10:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=74156 00:05:57.648 00:10:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:57.648 00:10:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:02.915 00:10:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 74156 00:06:02.915 00:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 74156 ']' 00:06:02.915 00:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 74156 00:06:02.915 00:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:02.915 00:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:02.915 00:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74156 00:06:02.915 killing process with pid 74156 00:06:02.915 00:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:02.915 00:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:02.915 00:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74156' 00:06:02.915 00:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 74156 00:06:02.915 00:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 74156 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:03.174 00:06:03.174 real 0m6.939s 00:06:03.174 user 0m6.451s 00:06:03.174 sys 0m0.758s 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.174 ************************************ 00:06:03.174 END TEST skip_rpc_with_json 00:06:03.174 
************************************ 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.174 00:10:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:03.174 00:10:17 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.174 00:10:17 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.174 00:10:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.174 ************************************ 00:06:03.174 START TEST skip_rpc_with_delay 00:06:03.174 ************************************ 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:03.174 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.433 [2024-07-23 00:10:17.916764] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
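The two app.c errors around this point document why skip_rpc_with_delay expects startup to abort: --wait-for-rpc tells the app to pause initialization until an RPC arrives, which can never happen once --no-rpc-server disables the RPC server. A short sketch of the rejection, against the spdk_tgt binary path used throughout this log:

    # --wait-for-rpc pauses startup until an RPC arrives, which is
    # impossible when --no-rpc-server disables the RPC server, so spdk_tgt
    # must refuse the combination and exit non-zero (es=1 in the trace)
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: contradictory flags were accepted" >&2
    else
        echo "OK: startup rejected as expected"
    fi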
00:06:03.433 [2024-07-23 00:10:17.916894] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:03.433 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:03.433 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.433 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:03.433 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.433 00:06:03.433 real 0m0.166s 00:06:03.433 user 0m0.081s 00:06:03.433 sys 0m0.084s 00:06:03.433 ************************************ 00:06:03.433 END TEST skip_rpc_with_delay 00:06:03.434 ************************************ 00:06:03.434 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.434 00:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:03.434 00:10:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:03.434 00:10:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:03.434 00:10:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:03.434 00:10:18 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.434 00:10:18 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.434 00:10:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.434 ************************************ 00:06:03.434 START TEST exit_on_failed_rpc_init 00:06:03.434 ************************************ 00:06:03.434 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:03.434 00:10:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=74267 00:06:03.434 00:10:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.434 00:10:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 74267 00:06:03.434 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 74267 ']' 00:06:03.434 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.434 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:03.434 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.434 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:03.434 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.693 [2024-07-23 00:10:18.162623] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
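The waitforlisten call above blocks until the freshly started target answers on its RPC socket. Conceptually it is just a poll loop; a hedged sketch follows, where the helper name, probe RPC, and retry budget are illustrative rather than the harness's actual implementation:

    # conceptual stand-in for waitforlisten: poll the RPC socket until the
    # target responds; rpc_get_methods serves as a cheap probe here, and the
    # function name and retry budget are assumptions, not harness internals
    wait_for_rpc_sock() {
        local sock=${1:-/var/tmp/spdk.sock} tries=100
        while (( tries-- > 0 )); do
            scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc_sock /var/tmp/spdk.sock || exit 1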
00:06:03.693 [2024-07-23 00:10:18.162767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74267 ] 00:06:03.693 [2024-07-23 00:10:18.312687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.693 [2024-07-23 00:10:18.358694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.629 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:04.629 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:04.629 00:10:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.629 00:10:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.629 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:04.629 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.629 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.629 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.629 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.629 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.629 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.629 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.629 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.629 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:04.629 00:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.629 [2024-07-23 00:10:19.049158] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:04.629 [2024-07-23 00:10:19.049299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74285 ] 00:06:04.629 [2024-07-23 00:10:19.201700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.629 [2024-07-23 00:10:19.245669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.629 [2024-07-23 00:10:19.245767] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
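Right here the second spdk_tgt instance hits the already-claimed default socket: the listen error above and the init/app_stop errors that follow are one failure cascade, which exit_on_failed_rpc_init deliberately provokes. Reproducing it takes two commands (a sketch; the -r alternate-socket remark at the end is an assumption, not shown in this log):

    # the cascade around this point in two commands: the first target claims
    # /var/tmp/spdk.sock, so a second target on the same default socket must
    # fail RPC listen/init and exit non-zero via spdk_app_stop
    build/bin/spdk_tgt -m 0x1 &
    first=$!
    sleep 5
    build/bin/spdk_tgt -m 0x2        # exits non-zero with the errors shown
    echo "second instance exit status: $?"
    kill "$first"
    # running two targets for real would need a distinct RPC socket for the
    # second one (an -r /var/tmp/spdk2.sock style option -- assumed here)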
00:06:04.629 [2024-07-23 00:10:19.245795] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:04.629 [2024-07-23 00:10:19.245814] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 74267 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 74267 ']' 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 74267 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74267 00:06:04.888 killing process with pid 74267 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74267' 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 74267 00:06:04.888 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 74267 00:06:05.147 ************************************ 00:06:05.147 END TEST exit_on_failed_rpc_init 00:06:05.147 ************************************ 00:06:05.147 00:06:05.147 real 0m1.718s 00:06:05.147 user 0m1.774s 00:06:05.147 sys 0m0.545s 00:06:05.147 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.147 00:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.405 00:10:19 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:05.405 00:06:05.405 real 0m14.703s 00:06:05.405 user 0m13.482s 00:06:05.405 sys 0m1.998s 00:06:05.405 00:10:19 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.405 ************************************ 00:06:05.405 END TEST skip_rpc 00:06:05.405 ************************************ 00:06:05.405 00:10:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.405 00:10:19 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:05.405 00:10:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.405 00:10:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.405 00:10:19 -- common/autotest_common.sh@10 -- # set +x 00:06:05.405 
************************************ 00:06:05.405 START TEST rpc_client 00:06:05.405 ************************************ 00:06:05.405 00:10:19 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:05.405 * Looking for test storage... 00:06:05.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:05.405 00:10:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:05.405 OK 00:06:05.663 00:10:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:05.663 00:06:05.663 real 0m0.206s 00:06:05.663 user 0m0.087s 00:06:05.663 sys 0m0.131s 00:06:05.663 00:10:20 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.663 00:10:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:05.663 ************************************ 00:06:05.663 END TEST rpc_client 00:06:05.663 ************************************ 00:06:05.663 00:10:20 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:05.663 00:10:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.663 00:10:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.663 00:10:20 -- common/autotest_common.sh@10 -- # set +x 00:06:05.663 ************************************ 00:06:05.663 START TEST json_config 00:06:05.663 ************************************ 00:06:05.663 00:10:20 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:05.663 00:10:20 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7893f947-64f4-4a28-aaff-df0e6993fd1b 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=7893f947-64f4-4a28-aaff-df0e6993fd1b 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.663 00:10:20 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:05.663 00:10:20 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.663 00:10:20 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.663 00:10:20 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.664 00:10:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.664 00:10:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.664 00:10:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.664 00:10:20 json_config -- paths/export.sh@5 -- # export PATH 00:06:05.664 00:10:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.664 00:10:20 json_config -- nvmf/common.sh@47 -- # : 0 00:06:05.664 00:10:20 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:05.664 00:10:20 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:05.664 00:10:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.664 00:10:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.664 00:10:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.664 00:10:20 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:05.664 00:10:20 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:05.664 00:10:20 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:05.664 00:10:20 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:05.664 00:10:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:05.664 00:10:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:05.664 00:10:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:05.664 00:10:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:05.664 WARNING: No tests are enabled so not running JSON configuration tests 00:06:05.664 00:10:20 json_config -- json_config/json_config.sh@27 -- # echo 
'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:05.664 00:10:20 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:05.664 00:06:05.664 real 0m0.117s 00:06:05.664 user 0m0.059s 00:06:05.664 sys 0m0.057s 00:06:05.664 00:10:20 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.664 ************************************ 00:06:05.664 00:10:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.664 END TEST json_config 00:06:05.664 ************************************ 00:06:05.923 00:10:20 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:05.923 00:10:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.923 00:10:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.923 00:10:20 -- common/autotest_common.sh@10 -- # set +x 00:06:05.923 ************************************ 00:06:05.923 START TEST json_config_extra_key 00:06:05.923 ************************************ 00:06:05.923 00:10:20 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:05.923 00:10:20 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7893f947-64f4-4a28-aaff-df0e6993fd1b 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7893f947-64f4-4a28-aaff-df0e6993fd1b 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.923 00:10:20 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:05.923 00:10:20 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.923 00:10:20 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.923 00:10:20 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.923 
00:10:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.923 00:10:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.923 00:10:20 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.923 00:10:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:05.924 00:10:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.924 00:10:20 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:05.924 00:10:20 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:05.924 00:10:20 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:05.924 00:10:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.924 00:10:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.924 00:10:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.924 00:10:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:05.924 00:10:20 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:05.924 00:10:20 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:05.924 00:10:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:05.924 00:10:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:05.924 00:10:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:05.924 00:10:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:05.924 00:10:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:05.924 00:10:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:05.924 00:10:20 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:05.924 00:10:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:05.924 00:10:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:05.924 00:10:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:05.924 INFO: launching applications... 00:06:05.924 00:10:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:05.924 00:10:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:05.924 00:10:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:05.924 00:10:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:05.924 00:10:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:05.924 00:10:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:05.924 00:10:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:05.924 00:10:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.924 00:10:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.924 00:10:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=74438 00:06:05.924 Waiting for target to run... 00:06:05.924 00:10:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:05.924 00:10:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 74438 /var/tmp/spdk_tgt.sock 00:06:05.924 00:10:20 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 74438 ']' 00:06:05.924 00:10:20 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:05.924 00:10:20 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:05.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:05.924 00:10:20 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.924 00:10:20 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:05.924 00:10:20 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.924 00:10:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:05.924 [2024-07-23 00:10:20.580089] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
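[Editor's note] The json_config_extra_key trace above shows the launch pattern from json_config/common.sh: start spdk_tgt against a JSON config and an RPC socket, then poll until the target answers. Below is a minimal sketch of that pattern using only paths and rpc.py flags (-s, -t) that appear in this trace; the polling loop itself is illustrative, not the exact waitforlisten helper from autotest_common.sh (which also tracks the process state).

    # Minimal launch-and-wait sketch (paths as traced; the readiness
    # probe is an assumption, not the real waitforlisten helper).
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC_SOCK=/var/tmp/spdk_tgt.sock
    CONFIG=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

    "$SPDK_BIN" -m 0x1 -s 1024 -r "$RPC_SOCK" --json "$CONFIG" &
    app_pid=$!

    # Poll the UNIX-domain RPC socket; 100 mirrors the traced
    # max_retries=100, the 0.1 s interval is an assumption.
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" -t 1 \
            rpc_get_methods &> /dev/null && break
        sleep 0.1
    done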
00:06:05.924 [2024-07-23 00:10:20.580218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74438 ] 00:06:06.491 [2024-07-23 00:10:20.950444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.491 [2024-07-23 00:10:20.983002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.749 00:10:21 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:06.749 00:06:06.749 00:10:21 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:06.749 00:10:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:06.749 INFO: shutting down applications... 00:06:06.749 00:10:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:06.750 00:10:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:06.750 00:10:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:06.750 00:10:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:06.750 00:10:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 74438 ]] 00:06:06.750 00:10:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 74438 00:06:06.750 00:10:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:06.750 00:10:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.750 00:10:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74438 00:06:06.750 00:10:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:07.317 00:10:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:07.318 00:10:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.318 00:10:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74438 00:06:07.318 00:10:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:07.318 00:10:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:07.318 00:10:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:07.318 SPDK target shutdown done 00:06:07.318 00:10:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:07.318 Success 00:06:07.318 00:10:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:07.318 00:06:07.318 real 0m1.539s 00:06:07.318 user 0m1.242s 00:06:07.318 sys 0m0.487s 00:06:07.318 00:10:21 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.318 00:10:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:07.318 ************************************ 00:06:07.318 END TEST json_config_extra_key 00:06:07.318 ************************************ 00:06:07.318 00:10:21 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:07.318 00:10:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.318 00:10:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.318 00:10:21 -- common/autotest_common.sh@10 -- # set +x 00:06:07.318 ************************************ 00:06:07.318 START TEST alias_rpc 00:06:07.318 ************************************ 
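[Editor's note] The json_config_extra_key teardown traced just above (json_config/common.sh@38-45) sends SIGINT and then polls with kill -0 for up to thirty half-second intervals until the target exits. Condensed into a standalone sketch; app_pid stands in for the traced app_pid["$app"] array entry:

    # Shutdown poll loop as traced in json_config/common.sh: SIGINT,
    # then wait up to 30 * 0.5 s for the pid to disappear.
    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$app_pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done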
00:06:07.318 00:10:21 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:07.576 * Looking for test storage... 00:06:07.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:07.576 00:10:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:07.576 00:10:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=74509 00:06:07.576 00:10:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:07.576 00:10:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 74509 00:06:07.576 00:10:22 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 74509 ']' 00:06:07.576 00:10:22 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.576 00:10:22 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.576 00:10:22 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.576 00:10:22 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.576 00:10:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.576 [2024-07-23 00:10:22.205242] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:07.577 [2024-07-23 00:10:22.205444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74509 ] 00:06:07.835 [2024-07-23 00:10:22.360048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.835 [2024-07-23 00:10:22.403129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.454 00:10:23 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.454 00:10:23 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:08.454 00:10:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:08.713 00:10:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 74509 00:06:08.713 00:10:23 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 74509 ']' 00:06:08.713 00:10:23 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 74509 00:06:08.713 00:10:23 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:08.713 00:10:23 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:08.713 00:10:23 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74509 00:06:08.713 00:10:23 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:08.713 00:10:23 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:08.713 killing process with pid 74509 00:06:08.713 00:10:23 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74509' 00:06:08.713 00:10:23 alias_rpc -- common/autotest_common.sh@965 -- # kill 74509 00:06:08.713 00:10:23 alias_rpc -- common/autotest_common.sh@970 -- # wait 74509 00:06:09.280 00:06:09.280 real 0m1.715s 00:06:09.280 user 0m1.760s 00:06:09.281 sys 0m0.503s 00:06:09.281 00:10:23 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.281 00:10:23 alias_rpc -- common/autotest_common.sh@10 
-- # set +x 00:06:09.281 ************************************ 00:06:09.281 END TEST alias_rpc 00:06:09.281 ************************************ 00:06:09.281 00:10:23 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:09.281 00:10:23 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:09.281 00:10:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.281 00:10:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.281 00:10:23 -- common/autotest_common.sh@10 -- # set +x 00:06:09.281 ************************************ 00:06:09.281 START TEST spdkcli_tcp 00:06:09.281 ************************************ 00:06:09.281 00:10:23 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:09.281 * Looking for test storage... 00:06:09.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:09.281 00:10:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:09.281 00:10:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:09.281 00:10:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:09.281 00:10:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:09.281 00:10:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:09.281 00:10:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:09.281 00:10:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:09.281 00:10:23 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:09.281 00:10:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.281 00:10:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=74581 00:06:09.281 00:10:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 74581 00:06:09.281 00:10:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:09.281 00:10:23 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 74581 ']' 00:06:09.281 00:10:23 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.281 00:10:23 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.281 00:10:23 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.281 00:10:23 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.281 00:10:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.540 [2024-07-23 00:10:24.022356] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
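[Editor's note] The spdkcli_tcp suite starting here drives the same JSON-RPC interface over TCP: as traced below, socat bridges 127.0.0.1:9998 to the target's UNIX-domain socket and rpc.py connects through the bridge. A sketch of that setup using only the commands and flags shown in the trace:

    # TCP bridge for the RPC socket, per spdkcli/tcp.sh below.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r = connection retries, -t = per-call timeout (traced values);
    # rpc.py now speaks JSON-RPC over TCP instead of the socket file.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"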
00:06:09.540 [2024-07-23 00:10:24.022519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74581 ] 00:06:09.540 [2024-07-23 00:10:24.179186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.799 [2024-07-23 00:10:24.229351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.799 [2024-07-23 00:10:24.229473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.366 00:10:24 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.366 00:10:24 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:10.366 00:10:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:10.366 00:10:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=74598 00:06:10.366 00:10:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:10.366 [ 00:06:10.366 "bdev_malloc_delete", 00:06:10.366 "bdev_malloc_create", 00:06:10.366 "bdev_null_resize", 00:06:10.366 "bdev_null_delete", 00:06:10.366 "bdev_null_create", 00:06:10.366 "bdev_nvme_cuse_unregister", 00:06:10.366 "bdev_nvme_cuse_register", 00:06:10.366 "bdev_opal_new_user", 00:06:10.366 "bdev_opal_set_lock_state", 00:06:10.366 "bdev_opal_delete", 00:06:10.366 "bdev_opal_get_info", 00:06:10.366 "bdev_opal_create", 00:06:10.366 "bdev_nvme_opal_revert", 00:06:10.366 "bdev_nvme_opal_init", 00:06:10.366 "bdev_nvme_send_cmd", 00:06:10.366 "bdev_nvme_get_path_iostat", 00:06:10.366 "bdev_nvme_get_mdns_discovery_info", 00:06:10.366 "bdev_nvme_stop_mdns_discovery", 00:06:10.366 "bdev_nvme_start_mdns_discovery", 00:06:10.366 "bdev_nvme_set_multipath_policy", 00:06:10.367 "bdev_nvme_set_preferred_path", 00:06:10.367 "bdev_nvme_get_io_paths", 00:06:10.367 "bdev_nvme_remove_error_injection", 00:06:10.367 "bdev_nvme_add_error_injection", 00:06:10.367 "bdev_nvme_get_discovery_info", 00:06:10.367 "bdev_nvme_stop_discovery", 00:06:10.367 "bdev_nvme_start_discovery", 00:06:10.367 "bdev_nvme_get_controller_health_info", 00:06:10.367 "bdev_nvme_disable_controller", 00:06:10.367 "bdev_nvme_enable_controller", 00:06:10.367 "bdev_nvme_reset_controller", 00:06:10.367 "bdev_nvme_get_transport_statistics", 00:06:10.367 "bdev_nvme_apply_firmware", 00:06:10.367 "bdev_nvme_detach_controller", 00:06:10.367 "bdev_nvme_get_controllers", 00:06:10.367 "bdev_nvme_attach_controller", 00:06:10.367 "bdev_nvme_set_hotplug", 00:06:10.367 "bdev_nvme_set_options", 00:06:10.367 "bdev_passthru_delete", 00:06:10.367 "bdev_passthru_create", 00:06:10.367 "bdev_lvol_set_parent_bdev", 00:06:10.367 "bdev_lvol_set_parent", 00:06:10.367 "bdev_lvol_check_shallow_copy", 00:06:10.367 "bdev_lvol_start_shallow_copy", 00:06:10.367 "bdev_lvol_grow_lvstore", 00:06:10.367 "bdev_lvol_get_lvols", 00:06:10.367 "bdev_lvol_get_lvstores", 00:06:10.367 "bdev_lvol_delete", 00:06:10.367 "bdev_lvol_set_read_only", 00:06:10.367 "bdev_lvol_resize", 00:06:10.367 "bdev_lvol_decouple_parent", 00:06:10.367 "bdev_lvol_inflate", 00:06:10.367 "bdev_lvol_rename", 00:06:10.367 "bdev_lvol_clone_bdev", 00:06:10.367 "bdev_lvol_clone", 00:06:10.367 "bdev_lvol_snapshot", 00:06:10.367 "bdev_lvol_create", 00:06:10.367 "bdev_lvol_delete_lvstore", 00:06:10.367 "bdev_lvol_rename_lvstore", 00:06:10.367 "bdev_lvol_create_lvstore", 00:06:10.367 
"bdev_raid_set_options", 00:06:10.367 "bdev_raid_remove_base_bdev", 00:06:10.367 "bdev_raid_add_base_bdev", 00:06:10.367 "bdev_raid_delete", 00:06:10.367 "bdev_raid_create", 00:06:10.367 "bdev_raid_get_bdevs", 00:06:10.367 "bdev_error_inject_error", 00:06:10.367 "bdev_error_delete", 00:06:10.367 "bdev_error_create", 00:06:10.367 "bdev_split_delete", 00:06:10.367 "bdev_split_create", 00:06:10.367 "bdev_delay_delete", 00:06:10.367 "bdev_delay_create", 00:06:10.367 "bdev_delay_update_latency", 00:06:10.367 "bdev_zone_block_delete", 00:06:10.367 "bdev_zone_block_create", 00:06:10.367 "blobfs_create", 00:06:10.367 "blobfs_detect", 00:06:10.367 "blobfs_set_cache_size", 00:06:10.367 "bdev_xnvme_delete", 00:06:10.367 "bdev_xnvme_create", 00:06:10.367 "bdev_aio_delete", 00:06:10.367 "bdev_aio_rescan", 00:06:10.367 "bdev_aio_create", 00:06:10.367 "bdev_ftl_set_property", 00:06:10.367 "bdev_ftl_get_properties", 00:06:10.367 "bdev_ftl_get_stats", 00:06:10.367 "bdev_ftl_unmap", 00:06:10.367 "bdev_ftl_unload", 00:06:10.367 "bdev_ftl_delete", 00:06:10.367 "bdev_ftl_load", 00:06:10.367 "bdev_ftl_create", 00:06:10.367 "bdev_virtio_attach_controller", 00:06:10.367 "bdev_virtio_scsi_get_devices", 00:06:10.367 "bdev_virtio_detach_controller", 00:06:10.367 "bdev_virtio_blk_set_hotplug", 00:06:10.367 "bdev_iscsi_delete", 00:06:10.367 "bdev_iscsi_create", 00:06:10.367 "bdev_iscsi_set_options", 00:06:10.367 "accel_error_inject_error", 00:06:10.367 "ioat_scan_accel_module", 00:06:10.367 "dsa_scan_accel_module", 00:06:10.367 "iaa_scan_accel_module", 00:06:10.367 "keyring_file_remove_key", 00:06:10.367 "keyring_file_add_key", 00:06:10.367 "keyring_linux_set_options", 00:06:10.367 "iscsi_get_histogram", 00:06:10.367 "iscsi_enable_histogram", 00:06:10.367 "iscsi_set_options", 00:06:10.367 "iscsi_get_auth_groups", 00:06:10.367 "iscsi_auth_group_remove_secret", 00:06:10.367 "iscsi_auth_group_add_secret", 00:06:10.367 "iscsi_delete_auth_group", 00:06:10.367 "iscsi_create_auth_group", 00:06:10.367 "iscsi_set_discovery_auth", 00:06:10.367 "iscsi_get_options", 00:06:10.367 "iscsi_target_node_request_logout", 00:06:10.367 "iscsi_target_node_set_redirect", 00:06:10.367 "iscsi_target_node_set_auth", 00:06:10.367 "iscsi_target_node_add_lun", 00:06:10.367 "iscsi_get_stats", 00:06:10.367 "iscsi_get_connections", 00:06:10.367 "iscsi_portal_group_set_auth", 00:06:10.367 "iscsi_start_portal_group", 00:06:10.367 "iscsi_delete_portal_group", 00:06:10.367 "iscsi_create_portal_group", 00:06:10.367 "iscsi_get_portal_groups", 00:06:10.367 "iscsi_delete_target_node", 00:06:10.367 "iscsi_target_node_remove_pg_ig_maps", 00:06:10.367 "iscsi_target_node_add_pg_ig_maps", 00:06:10.367 "iscsi_create_target_node", 00:06:10.367 "iscsi_get_target_nodes", 00:06:10.367 "iscsi_delete_initiator_group", 00:06:10.367 "iscsi_initiator_group_remove_initiators", 00:06:10.367 "iscsi_initiator_group_add_initiators", 00:06:10.367 "iscsi_create_initiator_group", 00:06:10.367 "iscsi_get_initiator_groups", 00:06:10.367 "nvmf_set_crdt", 00:06:10.367 "nvmf_set_config", 00:06:10.367 "nvmf_set_max_subsystems", 00:06:10.367 "nvmf_stop_mdns_prr", 00:06:10.367 "nvmf_publish_mdns_prr", 00:06:10.367 "nvmf_subsystem_get_listeners", 00:06:10.367 "nvmf_subsystem_get_qpairs", 00:06:10.367 "nvmf_subsystem_get_controllers", 00:06:10.367 "nvmf_get_stats", 00:06:10.367 "nvmf_get_transports", 00:06:10.367 "nvmf_create_transport", 00:06:10.367 "nvmf_get_targets", 00:06:10.367 "nvmf_delete_target", 00:06:10.367 "nvmf_create_target", 00:06:10.367 "nvmf_subsystem_allow_any_host", 
00:06:10.367 "nvmf_subsystem_remove_host", 00:06:10.367 "nvmf_subsystem_add_host", 00:06:10.367 "nvmf_ns_remove_host", 00:06:10.367 "nvmf_ns_add_host", 00:06:10.367 "nvmf_subsystem_remove_ns", 00:06:10.367 "nvmf_subsystem_add_ns", 00:06:10.367 "nvmf_subsystem_listener_set_ana_state", 00:06:10.367 "nvmf_discovery_get_referrals", 00:06:10.367 "nvmf_discovery_remove_referral", 00:06:10.367 "nvmf_discovery_add_referral", 00:06:10.367 "nvmf_subsystem_remove_listener", 00:06:10.367 "nvmf_subsystem_add_listener", 00:06:10.367 "nvmf_delete_subsystem", 00:06:10.367 "nvmf_create_subsystem", 00:06:10.367 "nvmf_get_subsystems", 00:06:10.367 "env_dpdk_get_mem_stats", 00:06:10.367 "nbd_get_disks", 00:06:10.367 "nbd_stop_disk", 00:06:10.367 "nbd_start_disk", 00:06:10.367 "ublk_recover_disk", 00:06:10.367 "ublk_get_disks", 00:06:10.367 "ublk_stop_disk", 00:06:10.367 "ublk_start_disk", 00:06:10.367 "ublk_destroy_target", 00:06:10.367 "ublk_create_target", 00:06:10.367 "virtio_blk_create_transport", 00:06:10.367 "virtio_blk_get_transports", 00:06:10.367 "vhost_controller_set_coalescing", 00:06:10.367 "vhost_get_controllers", 00:06:10.367 "vhost_delete_controller", 00:06:10.367 "vhost_create_blk_controller", 00:06:10.367 "vhost_scsi_controller_remove_target", 00:06:10.367 "vhost_scsi_controller_add_target", 00:06:10.367 "vhost_start_scsi_controller", 00:06:10.367 "vhost_create_scsi_controller", 00:06:10.367 "thread_set_cpumask", 00:06:10.367 "framework_get_scheduler", 00:06:10.367 "framework_set_scheduler", 00:06:10.367 "framework_get_reactors", 00:06:10.367 "thread_get_io_channels", 00:06:10.367 "thread_get_pollers", 00:06:10.367 "thread_get_stats", 00:06:10.367 "framework_monitor_context_switch", 00:06:10.367 "spdk_kill_instance", 00:06:10.367 "log_enable_timestamps", 00:06:10.367 "log_get_flags", 00:06:10.367 "log_clear_flag", 00:06:10.367 "log_set_flag", 00:06:10.367 "log_get_level", 00:06:10.367 "log_set_level", 00:06:10.367 "log_get_print_level", 00:06:10.367 "log_set_print_level", 00:06:10.367 "framework_enable_cpumask_locks", 00:06:10.367 "framework_disable_cpumask_locks", 00:06:10.367 "framework_wait_init", 00:06:10.367 "framework_start_init", 00:06:10.367 "scsi_get_devices", 00:06:10.367 "bdev_get_histogram", 00:06:10.367 "bdev_enable_histogram", 00:06:10.367 "bdev_set_qos_limit", 00:06:10.367 "bdev_set_qd_sampling_period", 00:06:10.367 "bdev_get_bdevs", 00:06:10.367 "bdev_reset_iostat", 00:06:10.367 "bdev_get_iostat", 00:06:10.367 "bdev_examine", 00:06:10.367 "bdev_wait_for_examine", 00:06:10.367 "bdev_set_options", 00:06:10.367 "notify_get_notifications", 00:06:10.367 "notify_get_types", 00:06:10.367 "accel_get_stats", 00:06:10.367 "accel_set_options", 00:06:10.367 "accel_set_driver", 00:06:10.367 "accel_crypto_key_destroy", 00:06:10.367 "accel_crypto_keys_get", 00:06:10.367 "accel_crypto_key_create", 00:06:10.367 "accel_assign_opc", 00:06:10.367 "accel_get_module_info", 00:06:10.367 "accel_get_opc_assignments", 00:06:10.367 "vmd_rescan", 00:06:10.367 "vmd_remove_device", 00:06:10.367 "vmd_enable", 00:06:10.367 "sock_get_default_impl", 00:06:10.367 "sock_set_default_impl", 00:06:10.367 "sock_impl_set_options", 00:06:10.367 "sock_impl_get_options", 00:06:10.367 "iobuf_get_stats", 00:06:10.367 "iobuf_set_options", 00:06:10.367 "framework_get_pci_devices", 00:06:10.367 "framework_get_config", 00:06:10.367 "framework_get_subsystems", 00:06:10.367 "trace_get_info", 00:06:10.367 "trace_get_tpoint_group_mask", 00:06:10.367 "trace_disable_tpoint_group", 00:06:10.367 "trace_enable_tpoint_group", 
00:06:10.367 "trace_clear_tpoint_mask", 00:06:10.367 "trace_set_tpoint_mask", 00:06:10.367 "keyring_get_keys", 00:06:10.367 "spdk_get_version", 00:06:10.367 "rpc_get_methods" 00:06:10.367 ] 00:06:10.367 00:10:24 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:10.367 00:10:24 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.368 00:10:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.368 00:10:25 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:10.368 00:10:25 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 74581 00:06:10.368 00:10:25 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 74581 ']' 00:06:10.368 00:10:25 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 74581 00:06:10.368 00:10:25 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:10.368 00:10:25 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:10.368 00:10:25 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74581 00:06:10.626 killing process with pid 74581 00:06:10.626 00:10:25 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:10.626 00:10:25 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:10.626 00:10:25 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74581' 00:06:10.626 00:10:25 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 74581 00:06:10.626 00:10:25 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 74581 00:06:10.885 ************************************ 00:06:10.885 END TEST spdkcli_tcp 00:06:10.885 ************************************ 00:06:10.885 00:06:10.885 real 0m1.690s 00:06:10.885 user 0m2.718s 00:06:10.885 sys 0m0.597s 00:06:10.885 00:10:25 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.885 00:10:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.885 00:10:25 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:10.885 00:10:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.885 00:10:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.885 00:10:25 -- common/autotest_common.sh@10 -- # set +x 00:06:10.885 ************************************ 00:06:10.885 START TEST dpdk_mem_utility 00:06:10.885 ************************************ 00:06:10.885 00:10:25 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:11.144 * Looking for test storage... 00:06:11.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:11.144 00:10:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:11.144 00:10:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=74673 00:06:11.144 00:10:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.144 00:10:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 74673 00:06:11.144 00:10:25 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 74673 ']' 00:06:11.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:11.144 00:10:25 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.144 00:10:25 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:11.144 00:10:25 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.144 00:10:25 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:11.144 00:10:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:11.144 [2024-07-23 00:10:25.749851] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:11.144 [2024-07-23 00:10:25.749980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74673 ] 00:06:11.403 [2024-07-23 00:10:25.902623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.403 [2024-07-23 00:10:25.945728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.971 00:10:26 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:11.971 00:10:26 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:11.971 00:10:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:11.971 00:10:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:11.971 00:10:26 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.971 00:10:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:11.971 { 00:06:11.971 "filename": "/tmp/spdk_mem_dump.txt" 00:06:11.971 } 00:06:11.971 00:10:26 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.971 00:10:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:11.971 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:11.971 1 heaps totaling size 814.000000 MiB 00:06:11.971 size: 814.000000 MiB heap id: 0 00:06:11.971 end heaps---------- 00:06:11.971 8 mempools totaling size 598.116089 MiB 00:06:11.971 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:11.971 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:11.971 size: 84.521057 MiB name: bdev_io_74673 00:06:11.971 size: 51.011292 MiB name: evtpool_74673 00:06:11.971 size: 50.003479 MiB name: msgpool_74673 00:06:11.971 size: 21.763794 MiB name: PDU_Pool 00:06:11.971 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:11.971 size: 0.026123 MiB name: Session_Pool 00:06:11.971 end mempools------- 00:06:11.971 6 memzones totaling size 4.142822 MiB 00:06:11.971 size: 1.000366 MiB name: RG_ring_0_74673 00:06:11.971 size: 1.000366 MiB name: RG_ring_1_74673 00:06:11.971 size: 1.000366 MiB name: RG_ring_4_74673 00:06:11.971 size: 1.000366 MiB name: RG_ring_5_74673 00:06:11.971 size: 0.125366 MiB name: RG_ring_2_74673 00:06:11.971 size: 0.015991 MiB name: RG_ring_3_74673 00:06:11.971 end memzones------- 00:06:11.971 00:10:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:11.971 heap id: 0 total size: 814.000000 MiB number of busy elements: 300 number of free elements: 15 00:06:11.971 list of free elements. 
size: 12.471924 MiB 00:06:11.971 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:11.971 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:11.971 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:11.971 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:11.971 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:11.971 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:11.971 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:11.971 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:11.971 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:11.971 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:06:11.971 element at address: 0x20000b200000 with size: 0.489624 MiB 00:06:11.971 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:11.971 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:11.971 element at address: 0x200027e00000 with size: 0.395935 MiB 00:06:11.971 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:11.971 list of standard malloc elements. size: 199.265503 MiB 00:06:11.971 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:11.971 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:11.971 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:11.971 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:11.971 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:11.971 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:11.971 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:11.971 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:11.971 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:11.971 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6600 with size: 0.000183 MiB 
00:06:11.971 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:11.971 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:11.971 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:11.971 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:11.971 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:11.971 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:11.971 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:11.971 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:11.971 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:11.971 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:11.971 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:11.971 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:11.972 element at 
address: 0x200003a596c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:11.972 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20000b27dac0 
with size: 0.000183 MiB 00:06:11.972 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:11.972 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:11.972 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:11.972 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:11.972 element at address: 0x20001aa93880 with size: 0.000183 MiB 
00:06:11.972 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:11.973 [~120 further per-element free-list entries elided -- every one 0.000183 MiB, covering the ranges 0x20001aa93940-0x20001aa95440 and 0x200027e655c0-0x200027e6ff00] 00:06:11.973 list of memzone associated elements. size: 602.262573 MiB 00:06:11.973 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:11.973 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:11.973 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:11.973 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:11.973 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:11.973 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_74673_0 00:06:11.973 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:11.973 associated memzone info: size: 48.002930 MiB name: MP_evtpool_74673_0 00:06:11.973 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:11.973 associated memzone info: size: 48.002930 MiB name: MP_msgpool_74673_0 00:06:11.973 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:11.973 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:11.973 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:11.973 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:11.973 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:11.973 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_74673 00:06:11.973 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:11.973 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_74673 00:06:11.973 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:11.973 associated memzone info: size: 1.007996 MiB name: MP_evtpool_74673 00:06:11.973 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:11.973 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:11.973 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:11.973 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:11.973 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:11.973 associated memzone info: size: 1.007996 MiB name:
MP_PDU_data_out_Pool 00:06:11.973 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:11.973 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:11.973 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:11.973 associated memzone info: size: 1.000366 MiB name: RG_ring_0_74673 00:06:11.973 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:11.973 associated memzone info: size: 1.000366 MiB name: RG_ring_1_74673 00:06:11.973 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:11.973 associated memzone info: size: 1.000366 MiB name: RG_ring_4_74673 00:06:11.973 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:11.973 associated memzone info: size: 1.000366 MiB name: RG_ring_5_74673 00:06:11.973 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:11.973 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_74673 00:06:11.973 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:11.973 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:11.973 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:11.973 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:11.973 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:11.973 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:11.973 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:11.973 associated memzone info: size: 0.125366 MiB name: RG_ring_2_74673 00:06:11.973 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:11.973 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:11.973 element at address: 0x200027e65740 with size: 0.023743 MiB 00:06:11.973 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:11.973 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:11.973 associated memzone info: size: 0.015991 MiB name: RG_ring_3_74673 00:06:11.973 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:06:11.973 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:11.973 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:11.974 associated memzone info: size: 0.000183 MiB name: MP_msgpool_74673 00:06:11.974 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:11.974 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_74673 00:06:11.974 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:06:11.974 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:11.974 00:10:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:11.974 00:10:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 74673 00:06:11.974 00:10:26 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 74673 ']' 00:06:11.974 00:10:26 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 74673 00:06:11.974 00:10:26 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:11.974 00:10:26 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:12.234 00:10:26 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74673 00:06:12.234 killing process with pid 74673 00:06:12.234 00:10:26 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:12.234 00:10:26 dpdk_mem_utility -- common/autotest_common.sh@956 -- # 
'[' reactor_0 = sudo ']' 00:06:12.234 00:10:26 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74673' 00:06:12.234 00:10:26 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 74673 00:06:12.234 00:10:26 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 74673 00:06:12.492 00:06:12.492 real 0m1.532s 00:06:12.492 user 0m1.412s 00:06:12.492 sys 0m0.518s 00:06:12.492 00:10:27 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.492 00:10:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:12.492 ************************************ 00:06:12.492 END TEST dpdk_mem_utility 00:06:12.492 ************************************ 00:06:12.493 00:10:27 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:12.493 00:10:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.493 00:10:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.493 00:10:27 -- common/autotest_common.sh@10 -- # set +x 00:06:12.493 ************************************ 00:06:12.493 START TEST event 00:06:12.493 ************************************ 00:06:12.493 00:10:27 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:12.751 * Looking for test storage... 00:06:12.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:12.751 00:10:27 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:12.751 00:10:27 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:12.751 00:10:27 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:12.751 00:10:27 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:12.751 00:10:27 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.751 00:10:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.751 ************************************ 00:06:12.751 START TEST event_perf 00:06:12.751 ************************************ 00:06:12.751 00:10:27 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:12.751 Running I/O for 1 seconds...[2024-07-23 00:10:27.317867] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:12.751 [2024-07-23 00:10:27.318103] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74745 ] 00:06:13.010 [2024-07-23 00:10:27.468726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:13.010 [2024-07-23 00:10:27.523305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.010 [2024-07-23 00:10:27.523483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.010 [2024-07-23 00:10:27.523530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.010 Running I/O for 1 seconds...[2024-07-23 00:10:27.523670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.947 00:06:13.947 lcore 0: 102809 00:06:13.947 lcore 1: 102810 00:06:13.947 lcore 2: 102813 00:06:13.947 lcore 3: 102810 00:06:13.947 done. 
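[editor's note] The event_perf numbers above come from one second of event dispatch across four reactors (lcores 0-3), roughly 102k events per core. A minimal sketch of re-running the same benchmark by hand, with the binary path and flags taken from the xtrace above; running it outside the run_test wrapper is the only assumption:

    # Drive the event-perf benchmark directly: -m 0xF masks in four cores,
    # -t 1 runs the measurement loop for one second.
    SPDK_ROOT=/home/vagrant/spdk_repo/spdk
    "$SPDK_ROOT/test/event/event_perf/event_perf" -m 0xF -t 1
    # Output shape: one "lcore N: <event count>" line per reactor, then "done."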
00:06:13.947 00:06:13.947 real 0m1.346s 00:06:13.947 user 0m4.092s 00:06:13.947 sys 0m0.132s 00:06:13.947 ************************************ 00:06:13.947 END TEST event_perf 00:06:13.947 ************************************ 00:06:13.947 00:10:28 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.947 00:10:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.214 00:10:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:14.214 00:10:28 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:14.214 00:10:28 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.214 00:10:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.214 ************************************ 00:06:14.214 START TEST event_reactor 00:06:14.214 ************************************ 00:06:14.214 00:10:28 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:14.214 [2024-07-23 00:10:28.740970] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:14.214 [2024-07-23 00:10:28.741115] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74781 ] 00:06:14.496 [2024-07-23 00:10:28.893906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.496 [2024-07-23 00:10:28.942005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.431 test_start 00:06:15.431 oneshot 00:06:15.431 tick 100 00:06:15.431 tick 100 00:06:15.431 tick 250 00:06:15.431 tick 100 00:06:15.431 tick 100 00:06:15.431 tick 100 00:06:15.431 tick 250 00:06:15.431 tick 500 00:06:15.431 tick 100 00:06:15.431 tick 100 00:06:15.431 tick 250 00:06:15.431 tick 100 00:06:15.431 tick 100 00:06:15.431 test_end 00:06:15.431 00:06:15.431 real 0m1.332s 00:06:15.431 user 0m1.126s 00:06:15.431 sys 0m0.099s 00:06:15.431 ************************************ 00:06:15.431 END TEST event_reactor 00:06:15.431 ************************************ 00:06:15.431 00:10:30 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.431 00:10:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:15.431 00:10:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:15.431 00:10:30 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:15.431 00:10:30 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.431 00:10:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.431 ************************************ 00:06:15.431 START TEST event_reactor_perf 00:06:15.431 ************************************ 00:06:15.431 00:10:30 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:15.689 [2024-07-23 00:10:30.146356] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:15.689 [2024-07-23 00:10:30.146491] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74818 ] 00:06:15.689 [2024-07-23 00:10:30.296469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.689 [2024-07-23 00:10:30.339490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.064 test_start 00:06:17.064 test_end 00:06:17.064 Performance: 366612 events per second 00:06:17.064 00:06:17.064 real 0m1.323s 00:06:17.064 user 0m1.124s 00:06:17.064 sys 0m0.092s 00:06:17.064 00:10:31 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.064 00:10:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.064 ************************************ 00:06:17.064 END TEST event_reactor_perf 00:06:17.064 ************************************ 00:06:17.064 00:10:31 event -- event/event.sh@49 -- # uname -s 00:06:17.064 00:10:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:17.064 00:10:31 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:17.064 00:10:31 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:17.064 00:10:31 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.064 00:10:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.064 ************************************ 00:06:17.064 START TEST event_scheduler 00:06:17.064 ************************************ 00:06:17.064 00:10:31 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:17.064 * Looking for test storage... 00:06:17.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:17.064 00:10:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:17.064 00:10:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=74880 00:06:17.064 00:10:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:17.064 00:10:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.064 00:10:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 74880 00:06:17.064 00:10:31 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 74880 ']' 00:06:17.064 00:10:31 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.064 00:10:31 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.064 00:10:31 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.064 00:10:31 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.064 00:10:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.064 [2024-07-23 00:10:31.721627] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:17.064 [2024-07-23 00:10:31.721749] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74880 ] 00:06:17.323 [2024-07-23 00:10:31.872619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.323 [2024-07-23 00:10:31.918177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.323 [2024-07-23 00:10:31.918403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.323 [2024-07-23 00:10:31.918584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.323 [2024-07-23 00:10:31.918460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.889 00:10:32 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:17.889 00:10:32 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:17.889 00:10:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:17.889 00:10:32 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.889 00:10:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.889 POWER: Env isn't set yet! 00:06:17.889 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:17.889 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.889 POWER: Cannot set governor of lcore 0 to userspace 00:06:17.889 POWER: Attempting to initialise PSTAT power management... 00:06:17.889 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.890 POWER: Cannot set governor of lcore 0 to performance 00:06:17.890 POWER: Attempting to initialise CPPC power management... 00:06:17.890 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.890 POWER: Cannot set governor of lcore 0 to userspace 00:06:17.890 POWER: Attempting to initialise VM power management... 00:06:17.890 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:17.890 POWER: Unable to set Power Management Environment for lcore 0 00:06:17.890 [2024-07-23 00:10:32.523339] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:17.890 [2024-07-23 00:10:32.523375] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:17.890 [2024-07-23 00:10:32.523389] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:17.890 [2024-07-23 00:10:32.523416] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:17.890 [2024-07-23 00:10:32.523431] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:17.890 [2024-07-23 00:10:32.523441] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:17.890 00:10:32 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.890 00:10:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:17.890 00:10:32 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.890 00:10:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.148 [2024-07-23 00:10:32.594604] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
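[editor's note] All three governor back ends (ACPI cpufreq, PSTAT, CPPC) and the VM power channel fail inside this guest, so the dynamic scheduler falls back to its defaults: load limit 20, core limit 80, core busy 95. The two rpc_cmd calls traced above correspond to plain rpc.py invocations; a minimal sketch, assuming the scheduler app is already up with --wait-for-rpc as its command line shows:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC framework_set_scheduler dynamic   # must be issued before init completes
    $RPC framework_start_init              # finish subsystem initialization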
00:06:18.148 00:10:32 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.148 00:10:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:18.148 00:10:32 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.148 00:10:32 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.148 00:10:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.148 ************************************ 00:06:18.148 START TEST scheduler_create_thread 00:06:18.148 ************************************ 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.148 2 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.148 3 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.148 4 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.148 5 00:06:18.148 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.149 6 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.149 7 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.149 8 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.149 9 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.149 10 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.149 00:10:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.715 00:10:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.715 00:10:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:18.715 00:10:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:18.715 00:10:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.715 00:10:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.650 00:10:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.650 00:10:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:19.650 00:10:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.650 00:10:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.585 00:10:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.585 00:10:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:20.585 00:10:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:20.585 00:10:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.585 00:10:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.151 ************************************ 00:06:21.151 END TEST scheduler_create_thread 00:06:21.151 ************************************ 00:06:21.151 00:10:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.151 00:06:21.151 real 0m3.221s 00:06:21.151 user 0m0.023s 00:06:21.151 sys 0m0.008s 00:06:21.151 00:10:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.151 00:10:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.410 00:10:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:21.410 00:10:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 74880 00:06:21.410 00:10:35 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 74880 ']' 00:06:21.410 00:10:35 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 74880 00:06:21.410 00:10:35 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:21.410 00:10:35 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:21.410 00:10:35 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74880 00:06:21.410 killing process with pid 74880 00:06:21.410 00:10:35 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:21.410 00:10:35 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:21.410 00:10:35 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74880' 00:06:21.410 00:10:35 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 74880 00:06:21.410 00:10:35 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 74880 00:06:21.669 [2024-07-23 00:10:36.209896] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
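[editor's note] The scheduler_create_thread subtest above exercises the test-local scheduler_plugin RPCs: it creates pinned threads with a core mask (-m) and an active percentage (-a), drops one thread to 50% active, and deletes another. A condensed sketch of that sequence, with verbs and flags copied from the xtrace; capturing the returned thread id into a shell variable is an assumption about the plugin's output format:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
    tid=$($RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    $RPC scheduler_thread_set_active "$tid" 50   # retune to 50% active
    $RPC scheduler_thread_delete "$tid"          # and remove it again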
00:06:21.928 00:06:21.928 real 0m5.007s 00:06:21.928 user 0m9.753s 00:06:21.928 sys 0m0.472s 00:06:21.928 ************************************ 00:06:21.928 END TEST event_scheduler 00:06:21.928 ************************************ 00:06:21.928 00:10:36 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.928 00:10:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.928 00:10:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:21.928 00:10:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:21.928 00:10:36 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.928 00:10:36 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.928 00:10:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.928 ************************************ 00:06:21.928 START TEST app_repeat 00:06:21.928 ************************************ 00:06:21.928 00:10:36 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:21.928 00:10:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.928 00:10:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.928 00:10:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:21.928 00:10:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.928 00:10:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:21.928 00:10:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:21.928 00:10:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:21.928 Process app_repeat pid: 74981 00:06:21.928 00:10:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=74981 00:06:21.928 00:10:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:21.928 00:10:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.928 00:10:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 74981' 00:06:21.928 spdk_app_start Round 0 00:06:21.928 00:10:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:21.928 00:10:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:21.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.928 00:10:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74981 /var/tmp/spdk-nbd.sock 00:06:21.928 00:10:36 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74981 ']' 00:06:21.928 00:10:36 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.928 00:10:36 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.928 00:10:36 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.928 00:10:36 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.928 00:10:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.187 [2024-07-23 00:10:36.658382] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:22.187 [2024-07-23 00:10:36.658641] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74981 ] 00:06:22.187 [2024-07-23 00:10:36.796897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.187 [2024-07-23 00:10:36.838741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.187 [2024-07-23 00:10:36.838839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.157 00:10:37 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:23.157 00:10:37 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:23.157 00:10:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.157 Malloc0 00:06:23.157 00:10:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.416 Malloc1 00:06:23.416 00:10:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.416 00:10:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.416 00:10:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.416 00:10:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.416 00:10:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.416 00:10:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.416 00:10:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.416 00:10:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.416 00:10:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.416 00:10:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.416 00:10:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.416 00:10:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.416 00:10:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.416 00:10:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.416 00:10:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.416 00:10:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.416 /dev/nbd0 00:06:23.416 00:10:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.416 00:10:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.416 00:10:38 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:23.416 00:10:38 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:23.416 00:10:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:23.416 00:10:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:23.416 00:10:38 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:23.416 00:10:38 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:06:23.416 00:10:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:23.416 00:10:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:23.416 00:10:38 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.416 1+0 records in 00:06:23.416 1+0 records out 00:06:23.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031853 s, 12.9 MB/s 00:06:23.416 00:10:38 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.416 00:10:38 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:23.416 00:10:38 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.417 00:10:38 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:23.417 00:10:38 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:23.417 00:10:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.417 00:10:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.417 00:10:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.675 /dev/nbd1 00:06:23.675 00:10:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.675 00:10:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.675 00:10:38 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:23.675 00:10:38 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:23.675 00:10:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:23.675 00:10:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:23.675 00:10:38 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:23.675 00:10:38 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:23.675 00:10:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:23.675 00:10:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:23.675 00:10:38 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.675 1+0 records in 00:06:23.675 1+0 records out 00:06:23.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355915 s, 11.5 MB/s 00:06:23.675 00:10:38 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.675 00:10:38 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:23.675 00:10:38 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.675 00:10:38 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:23.675 00:10:38 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:23.675 00:10:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.675 00:10:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.675 00:10:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.675 00:10:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.675 
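[editor's note] Round 0 above creates two 64 MiB malloc bdevs, exports them through the kernel nbd driver as /dev/nbd0 and /dev/nbd1, and sanity-reads one 4 KiB block from each with dd. The same round trip by hand, commands lifted from the xtrace; this assumes an app_repeat instance listening on the -r socket and the nbd module loaded via modprobe nbd:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096         # 64 MiB bdev, 4 KiB blocks -> Malloc0
    $RPC nbd_start_disk Malloc0 /dev/nbd0   # expose the bdev as a block device
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest \
        bs=4096 count=1 iflag=direct        # one direct-I/O block, as above
    $RPC nbd_stop_disk /dev/nbd0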
00:10:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.934 { 00:06:23.934 "nbd_device": "/dev/nbd0", 00:06:23.934 "bdev_name": "Malloc0" 00:06:23.934 }, 00:06:23.934 { 00:06:23.934 "nbd_device": "/dev/nbd1", 00:06:23.934 "bdev_name": "Malloc1" 00:06:23.934 } 00:06:23.934 ]' 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.934 { 00:06:23.934 "nbd_device": "/dev/nbd0", 00:06:23.934 "bdev_name": "Malloc0" 00:06:23.934 }, 00:06:23.934 { 00:06:23.934 "nbd_device": "/dev/nbd1", 00:06:23.934 "bdev_name": "Malloc1" 00:06:23.934 } 00:06:23.934 ]' 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.934 /dev/nbd1' 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.934 /dev/nbd1' 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.934 256+0 records in 00:06:23.934 256+0 records out 00:06:23.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120995 s, 86.7 MB/s 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.934 256+0 records in 00:06:23.934 256+0 records out 00:06:23.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289121 s, 36.3 MB/s 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.934 00:10:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.193 256+0 records in 00:06:24.193 256+0 records out 00:06:24.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290337 s, 36.1 MB/s 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.193 00:10:38 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.193 00:10:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.450 00:10:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.450 00:10:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.450 00:10:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.450 00:10:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.450 00:10:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.450 00:10:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.450 00:10:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.450 00:10:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.450 00:10:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.450 00:10:39 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.450 00:10:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.709 00:10:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.709 00:10:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.709 00:10:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.709 00:10:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.709 00:10:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.709 00:10:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.709 00:10:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.709 00:10:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.709 00:10:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.709 00:10:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.709 00:10:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.709 00:10:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.709 00:10:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.967 00:10:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:25.226 [2024-07-23 00:10:39.689939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.226 [2024-07-23 00:10:39.728705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.226 [2024-07-23 00:10:39.728710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.226 [2024-07-23 00:10:39.771953] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.226 [2024-07-23 00:10:39.772014] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.513 spdk_app_start Round 1 00:06:28.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.513 00:10:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.513 00:10:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:28.513 00:10:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74981 /var/tmp/spdk-nbd.sock 00:06:28.513 00:10:42 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74981 ']' 00:06:28.513 00:10:42 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.513 00:10:42 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.513 00:10:42 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
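[editor's note] Each round's write/verify pass is the nbd_dd_data_verify helper seen in the traces: fill a temp file with 1 MiB of random data, dd it onto every nbd device with O_DIRECT, then cmp the device contents back against the file. A stripped-down sketch of that pattern, paths as in the log, with the per-device steps condensed into a loop:

    RAND=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$RAND" bs=4096 count=256    # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$RAND" of="$nbd" bs=4096 count=256 oflag=direct   # write it out
        cmp -b -n 1M "$RAND" "$nbd"                    # byte-compare readback
    done
    rm "$RAND"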
00:06:28.513 00:10:42 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.513 00:10:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.513 00:10:42 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.513 00:10:42 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:28.513 00:10:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.513 Malloc0 00:06:28.513 00:10:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.513 Malloc1 00:06:28.513 00:10:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.513 00:10:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.513 00:10:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.513 00:10:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.513 00:10:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.513 00:10:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.513 00:10:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.513 00:10:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.513 00:10:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.513 00:10:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.513 00:10:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.513 00:10:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.513 00:10:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.513 00:10:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.513 00:10:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.513 00:10:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.770 /dev/nbd0 00:06:28.770 00:10:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.770 00:10:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.770 00:10:43 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:28.770 00:10:43 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:28.770 00:10:43 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:28.770 00:10:43 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:28.770 00:10:43 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:28.770 00:10:43 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:28.770 00:10:43 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:28.770 00:10:43 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:28.770 00:10:43 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.770 1+0 records in 00:06:28.771 1+0 records out 
00:06:28.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191694 s, 21.4 MB/s 00:06:28.771 00:10:43 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.771 00:10:43 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:28.771 00:10:43 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.771 00:10:43 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:28.771 00:10:43 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:28.771 00:10:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.771 00:10:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.771 00:10:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.029 /dev/nbd1 00:06:29.029 00:10:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.029 00:10:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.029 00:10:43 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:29.029 00:10:43 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:29.029 00:10:43 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:29.029 00:10:43 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:29.029 00:10:43 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:29.029 00:10:43 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:29.029 00:10:43 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:29.029 00:10:43 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:29.029 00:10:43 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.029 1+0 records in 00:06:29.029 1+0 records out 00:06:29.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271273 s, 15.1 MB/s 00:06:29.029 00:10:43 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.029 00:10:43 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:29.029 00:10:43 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.029 00:10:43 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:29.029 00:10:43 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:29.029 00:10:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.029 00:10:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.029 00:10:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.029 00:10:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.029 00:10:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.288 { 00:06:29.288 "nbd_device": "/dev/nbd0", 00:06:29.288 "bdev_name": "Malloc0" 00:06:29.288 }, 00:06:29.288 { 00:06:29.288 "nbd_device": "/dev/nbd1", 00:06:29.288 "bdev_name": "Malloc1" 00:06:29.288 } 
00:06:29.288 ]' 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.288 { 00:06:29.288 "nbd_device": "/dev/nbd0", 00:06:29.288 "bdev_name": "Malloc0" 00:06:29.288 }, 00:06:29.288 { 00:06:29.288 "nbd_device": "/dev/nbd1", 00:06:29.288 "bdev_name": "Malloc1" 00:06:29.288 } 00:06:29.288 ]' 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.288 /dev/nbd1' 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.288 /dev/nbd1' 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.288 256+0 records in 00:06:29.288 256+0 records out 00:06:29.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014028 s, 74.7 MB/s 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.288 256+0 records in 00:06:29.288 256+0 records out 00:06:29.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027541 s, 38.1 MB/s 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.288 256+0 records in 00:06:29.288 256+0 records out 00:06:29.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299301 s, 35.0 MB/s 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.288 00:10:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.547 00:10:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.547 00:10:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.547 00:10:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.547 00:10:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.547 00:10:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.547 00:10:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.547 00:10:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.547 00:10:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.547 00:10:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.547 00:10:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.806 00:10:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.806 00:10:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.806 00:10:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.806 00:10:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.806 00:10:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.806 00:10:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.806 00:10:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.806 00:10:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.806 00:10:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.806 00:10:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.806 00:10:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.065 00:10:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.065 00:10:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.065 00:10:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:06:30.065 00:10:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.065 00:10:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.065 00:10:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.065 00:10:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:30.065 00:10:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.065 00:10:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.065 00:10:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.065 00:10:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.065 00:10:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.065 00:10:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:30.324 00:10:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:30.324 [2024-07-23 00:10:44.934531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.324 [2024-07-23 00:10:44.975225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.324 [2024-07-23 00:10:44.975253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.585 [2024-07-23 00:10:45.018836] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.585 [2024-07-23 00:10:45.018898] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.143 spdk_app_start Round 2 00:06:33.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.143 00:10:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:33.143 00:10:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:33.143 00:10:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74981 /var/tmp/spdk-nbd.sock 00:06:33.143 00:10:47 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74981 ']' 00:06:33.143 00:10:47 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.143 00:10:47 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.143 00:10:47 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
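Both nbd_start_disk calls in the round above gate on the waitfornbd helper from autotest_common.sh, whose trace (lines @864 through @885) repeats identically for nbd0 and nbd1. Condensed into a sketch for readability: the temp-file path is simplified here, and no sleep between retries is shown because the trace does not show one; treat this as an illustration of the pattern rather than the verbatim helper.

    waitfornbd() {
        local nbd_name=$1 i size
        # Phase 1: wait for the kernel to register the device node
        # (the trace allows up to 20 attempts for each phase).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
        done
        # Phase 2: retry a single direct 4 KiB read until it yields real data,
        # checking the read size with stat as the trace does at @882-@884.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }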
00:06:33.143 00:10:47 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.143 00:10:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.403 00:10:47 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.403 00:10:47 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:33.403 00:10:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.662 Malloc0 00:06:33.662 00:10:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.921 Malloc1 00:06:33.921 00:10:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:33.921 /dev/nbd0 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:33.921 00:10:48 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:33.921 00:10:48 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:33.921 00:10:48 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:33.921 00:10:48 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:33.921 00:10:48 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:33.921 00:10:48 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:33.921 00:10:48 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:33.921 00:10:48 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:33.921 00:10:48 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.921 1+0 records in 00:06:33.921 1+0 records out 
00:06:33.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569544 s, 7.2 MB/s 00:06:33.921 00:10:48 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.921 00:10:48 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:33.921 00:10:48 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.921 00:10:48 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:33.921 00:10:48 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.921 00:10:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.180 /dev/nbd1 00:06:34.180 00:10:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.180 00:10:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.180 00:10:48 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:34.180 00:10:48 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:34.180 00:10:48 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:34.180 00:10:48 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:34.180 00:10:48 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:34.180 00:10:48 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:34.180 00:10:48 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:34.180 00:10:48 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:34.180 00:10:48 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.180 1+0 records in 00:06:34.180 1+0 records out 00:06:34.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437453 s, 9.4 MB/s 00:06:34.180 00:10:48 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.180 00:10:48 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:34.180 00:10:48 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.180 00:10:48 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:34.180 00:10:48 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:34.180 00:10:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.180 00:10:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.180 00:10:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.180 00:10:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.180 00:10:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.439 00:10:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:34.439 { 00:06:34.439 "nbd_device": "/dev/nbd0", 00:06:34.439 "bdev_name": "Malloc0" 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "nbd_device": "/dev/nbd1", 00:06:34.439 "bdev_name": "Malloc1" 00:06:34.439 } 
00:06:34.439 ]' 00:06:34.440 00:10:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:34.440 { 00:06:34.440 "nbd_device": "/dev/nbd0", 00:06:34.440 "bdev_name": "Malloc0" 00:06:34.440 }, 00:06:34.440 { 00:06:34.440 "nbd_device": "/dev/nbd1", 00:06:34.440 "bdev_name": "Malloc1" 00:06:34.440 } 00:06:34.440 ]' 00:06:34.440 00:10:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:34.440 /dev/nbd1' 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:34.440 /dev/nbd1' 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:34.440 256+0 records in 00:06:34.440 256+0 records out 00:06:34.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493866 s, 212 MB/s 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:34.440 256+0 records in 00:06:34.440 256+0 records out 00:06:34.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027901 s, 37.6 MB/s 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:34.440 256+0 records in 00:06:34.440 256+0 records out 00:06:34.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286611 s, 36.6 MB/s 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:34.440 00:10:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.699 00:10:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.959 00:10:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.959 00:10:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.959 00:10:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.959 00:10:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.959 00:10:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.959 00:10:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.959 00:10:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.959 00:10:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.959 00:10:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.959 00:10:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.959 00:10:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.218 00:10:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.218 00:10:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.218 00:10:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:35.218 00:10:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.218 00:10:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.218 00:10:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.218 00:10:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:35.218 00:10:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.218 00:10:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.218 00:10:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:35.218 00:10:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:35.218 00:10:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:35.218 00:10:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:35.476 00:10:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:35.476 [2024-07-23 00:10:50.136710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.734 [2024-07-23 00:10:50.175865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.734 [2024-07-23 00:10:50.175868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.734 [2024-07-23 00:10:50.219095] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.734 [2024-07-23 00:10:50.219153] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:39.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:39.017 00:10:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 74981 /var/tmp/spdk-nbd.sock 00:06:39.017 00:10:52 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74981 ']' 00:06:39.017 00:10:52 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.017 00:10:52 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.017 00:10:52 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
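The zero-count check at bdev/nbd_common.sh@61 through @66 above is worth spelling out: once both nbd_stop_disk calls have completed, nbd_get_disks returns an empty JSON array, the jq projection yields nothing, and grep -c counts zero /dev/nbd matches, with the traced 'true' keeping the pipeline from failing on no matches. A condensed sketch of that helper, with paths as in this run:

    nbd_get_count() {
        local rpc_server=$1
        local disks_json disks_name
        disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits 1 when it counts zero matches, so tolerate that
        # while still emitting the count (0 here once both disks are stopped).
        echo "$disks_name" | grep -c /dev/nbd || true
    }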
00:06:39.017 00:10:52 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.017 00:10:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.017 00:10:53 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.017 00:10:53 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:39.017 00:10:53 event.app_repeat -- event/event.sh@39 -- # killprocess 74981 00:06:39.017 00:10:53 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 74981 ']' 00:06:39.017 00:10:53 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 74981 00:06:39.017 00:10:53 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:39.017 00:10:53 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:39.017 00:10:53 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74981 00:06:39.017 killing process with pid 74981 00:06:39.017 00:10:53 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:39.017 00:10:53 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:39.017 00:10:53 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74981' 00:06:39.017 00:10:53 event.app_repeat -- common/autotest_common.sh@965 -- # kill 74981 00:06:39.017 00:10:53 event.app_repeat -- common/autotest_common.sh@970 -- # wait 74981 00:06:39.017 spdk_app_start is called in Round 0. 00:06:39.017 Shutdown signal received, stop current app iteration 00:06:39.017 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:39.017 spdk_app_start is called in Round 1. 00:06:39.017 Shutdown signal received, stop current app iteration 00:06:39.017 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:39.017 spdk_app_start is called in Round 2. 00:06:39.017 Shutdown signal received, stop current app iteration 00:06:39.017 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:39.017 spdk_app_start is called in Round 3. 00:06:39.017 Shutdown signal received, stop current app iteration 00:06:39.017 00:10:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:39.017 00:10:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:39.017 00:06:39.017 real 0m16.804s 00:06:39.017 user 0m36.456s 00:06:39.017 sys 0m2.717s 00:06:39.018 00:10:53 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.018 00:10:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.018 ************************************ 00:06:39.018 END TEST app_repeat 00:06:39.018 ************************************ 00:06:39.018 00:10:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:39.018 00:10:53 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:39.018 00:10:53 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:39.018 00:10:53 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.018 00:10:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.018 ************************************ 00:06:39.018 START TEST cpu_locks 00:06:39.018 ************************************ 00:06:39.018 00:10:53 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:39.018 * Looking for test storage... 
00:06:39.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:39.018 00:10:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:39.018 00:10:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:39.018 00:10:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:39.018 00:10:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:39.018 00:10:53 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:39.018 00:10:53 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.018 00:10:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.018 ************************************ 00:06:39.018 START TEST default_locks 00:06:39.018 ************************************ 00:06:39.018 00:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:39.018 00:10:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75403 00:06:39.018 00:10:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 75403 00:06:39.018 00:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 75403 ']' 00:06:39.018 00:10:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.018 00:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.018 00:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.018 00:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.018 00:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.018 00:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.277 [2024-07-23 00:10:53.724404] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:39.277 [2024-07-23 00:10:53.724540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75403 ] 00:06:39.277 [2024-07-23 00:10:53.876145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.277 [2024-07-23 00:10:53.918664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.845 00:10:54 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.845 00:10:54 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:39.845 00:10:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 75403 00:06:39.845 00:10:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 75403 00:06:39.845 00:10:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.411 00:10:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 75403 00:06:40.411 00:10:54 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 75403 ']' 00:06:40.411 00:10:54 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 75403 00:06:40.411 00:10:54 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:40.411 00:10:54 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:40.411 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75403 00:06:40.411 killing process with pid 75403 00:06:40.411 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:40.411 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:40.411 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75403' 00:06:40.411 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 75403 00:06:40.411 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 75403 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75403 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75403 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:40.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
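The core-lock assertion in default_locks is tiny: locks_exist (event/cpu_locks.sh@22, traced above for pid 75403) asks lslocks which file locks the target process holds and greps for the SPDK cpu-lock name. Roughly:

    locks_exist() {
        local pid=$1
        # spdk_tgt -m 0x1 takes a file lock per claimed core; the lock file
        # names contain 'spdk_cpu_lock' (exact paths are not shown in this trace).
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

After killprocess tears the target down, the NOT waitforlisten call that follows confirms the pid is really gone, which is why the "No such process" line below is the expected outcome.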
00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 75403 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 75403 ']' 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.979 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75403) - No such process 00:06:40.979 ERROR: process (pid: 75403) is no longer running 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.979 ************************************ 00:06:40.979 END TEST default_locks 00:06:40.979 ************************************ 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:40.979 00:06:40.979 real 0m1.796s 00:06:40.979 user 0m1.777s 00:06:40.979 sys 0m0.607s 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.979 00:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.979 00:10:55 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:40.979 00:10:55 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:40.979 00:10:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.979 00:10:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.979 ************************************ 00:06:40.979 START TEST default_locks_via_rpc 00:06:40.979 ************************************ 00:06:40.979 00:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:40.979 00:10:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=75451 00:06:40.979 00:10:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 75451 00:06:40.979 00:10:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.979 00:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75451 ']' 00:06:40.980 00:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.980 00:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:40.980 00:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.980 00:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:40.980 00:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.980 [2024-07-23 00:10:55.585001] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:40.980 [2024-07-23 00:10:55.585131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75451 ] 00:06:41.238 [2024-07-23 00:10:55.734398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.239 [2024-07-23 00:10:55.777129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 75451 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 75451 00:06:41.834 00:10:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.400 00:10:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 75451 00:06:42.400 00:10:56 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 75451 ']' 00:06:42.400 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 75451 00:06:42.400 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:42.400 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:42.400 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75451 00:06:42.400 killing process with pid 75451 00:06:42.400 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:42.400 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:42.400 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75451' 00:06:42.401 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 75451 00:06:42.401 00:10:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 75451 00:06:42.660 ************************************ 00:06:42.660 END TEST default_locks_via_rpc 00:06:42.660 ************************************ 00:06:42.660 00:06:42.660 real 0m1.710s 00:06:42.660 user 0m1.693s 00:06:42.660 sys 0m0.593s 00:06:42.660 00:10:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.660 00:10:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.660 00:10:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:42.660 00:10:57 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:42.660 00:10:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.660 00:10:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.660 ************************************ 00:06:42.660 START TEST non_locking_app_on_locked_coremask 00:06:42.660 ************************************ 00:06:42.660 00:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:42.660 00:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75497 00:06:42.660 00:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.660 00:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 75497 /var/tmp/spdk.sock 00:06:42.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.660 00:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75497 ']' 00:06:42.660 00:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.660 00:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.660 00:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
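default_locks_via_rpc, completed just above, drives the same lock over the RPC socket instead of process start-up flags: framework_disable_cpumask_locks releases the core lock (the traced no_locks helper then finds zero lock files), and framework_enable_cpumask_locks re-takes it before the usual lslocks check. Condensed, with $spdk_tgt_pid standing in for the traced pid 75451 and the second line substituting an lslocks count for the no_locks file-glob check the trace actually performs:

    # Toggle the cpumask lock on a running spdk_tgt over its RPC socket.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -c spdk_cpu_lock   # expected: 0 matches
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # expected to succeed again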
00:06:42.660 00:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.660 00:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.919 [2024-07-23 00:10:57.381809] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:42.919 [2024-07-23 00:10:57.381972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75497 ] 00:06:42.919 [2024-07-23 00:10:57.533144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.919 [2024-07-23 00:10:57.574334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.487 00:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.487 00:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:43.487 00:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75513 00:06:43.487 00:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 75513 /var/tmp/spdk2.sock 00:06:43.487 00:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:43.487 00:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75513 ']' 00:06:43.487 00:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.487 00:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.487 00:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.487 00:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.487 00:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.746 [2024-07-23 00:10:58.248084] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:43.746 [2024-07-23 00:10:58.248213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75513 ] 00:06:43.746 [2024-07-23 00:10:58.391249] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.746 [2024-07-23 00:10:58.395323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.005 [2024-07-23 00:10:58.476973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.572 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.572 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:44.572 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 75497 00:06:44.572 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75497 00:06:44.572 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.508 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 75497 00:06:45.508 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75497 ']' 00:06:45.508 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75497 00:06:45.508 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:45.508 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:45.508 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75497 00:06:45.508 killing process with pid 75497 00:06:45.508 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:45.508 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:45.508 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75497' 00:06:45.508 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75497 00:06:45.508 00:10:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75497 00:06:46.075 00:11:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 75513 00:06:46.075 00:11:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75513 ']' 00:06:46.075 00:11:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75513 00:06:46.075 00:11:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:46.075 00:11:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:46.075 00:11:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75513 00:06:46.075 killing process with pid 75513 00:06:46.075 00:11:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:46.075 00:11:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:46.075 00:11:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75513' 00:06:46.075 00:11:00 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75513 00:06:46.075 00:11:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75513 00:06:46.334 00:06:46.334 real 0m3.738s 00:06:46.334 user 0m3.887s 00:06:46.334 sys 0m1.168s 00:06:46.334 00:11:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.334 00:11:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.334 ************************************ 00:06:46.334 END TEST non_locking_app_on_locked_coremask 00:06:46.334 ************************************ 00:06:46.594 00:11:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:46.594 00:11:01 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.594 00:11:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.594 00:11:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.594 ************************************ 00:06:46.594 START TEST locking_app_on_unlocked_coremask 00:06:46.594 ************************************ 00:06:46.594 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:46.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.594 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=75582 00:06:46.594 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 75582 /var/tmp/spdk.sock 00:06:46.594 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:46.594 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75582 ']' 00:06:46.594 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.594 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.594 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.594 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.594 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.594 [2024-07-23 00:11:01.179473] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:46.594 [2024-07-23 00:11:01.180069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75582 ] 00:06:46.853 [2024-07-23 00:11:01.333626] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:46.853 [2024-07-23 00:11:01.333834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.853 [2024-07-23 00:11:01.376275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.420 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.420 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:47.420 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=75593 00:06:47.420 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 75593 /var/tmp/spdk2.sock 00:06:47.420 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.420 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75593 ']' 00:06:47.420 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.420 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:47.420 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.420 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:47.420 00:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.420 [2024-07-23 00:11:02.049289] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:47.421 [2024-07-23 00:11:02.049554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75593 ] 00:06:47.679 [2024-07-23 00:11:02.196806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.679 [2024-07-23 00:11:02.283348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.247 00:11:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:48.247 00:11:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:48.247 00:11:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 75593 00:06:48.247 00:11:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75593 00:06:48.247 00:11:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.191 00:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 75582 00:06:49.191 00:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75582 ']' 00:06:49.191 00:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 75582 00:06:49.191 00:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:49.191 00:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:49.191 00:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75582 00:06:49.191 killing process with pid 75582 00:06:49.191 00:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:49.191 00:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:49.191 00:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75582' 00:06:49.191 00:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 75582 00:06:49.191 00:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 75582 00:06:49.758 00:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 75593 00:06:49.758 00:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75593 ']' 00:06:49.758 00:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 75593 00:06:49.758 00:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:49.758 00:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:49.758 00:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75593 00:06:50.017 killing process with pid 75593 00:06:50.017 00:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:50.017 00:11:04 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:50.017 00:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75593' 00:06:50.017 00:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 75593 00:06:50.017 00:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 75593 00:06:50.276 00:06:50.276 real 0m3.759s 00:06:50.276 user 0m3.899s 00:06:50.276 sys 0m1.203s 00:06:50.276 00:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.276 ************************************ 00:06:50.276 END TEST locking_app_on_unlocked_coremask 00:06:50.276 ************************************ 00:06:50.276 00:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.276 00:11:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:50.276 00:11:04 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:50.277 00:11:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.277 00:11:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.277 ************************************ 00:06:50.277 START TEST locking_app_on_locked_coremask 00:06:50.277 ************************************ 00:06:50.277 00:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:50.277 00:11:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=75662 00:06:50.277 00:11:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 75662 /var/tmp/spdk.sock 00:06:50.277 00:11:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.277 00:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75662 ']' 00:06:50.277 00:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.277 00:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.277 00:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.277 00:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.277 00:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.536 [2024-07-23 00:11:05.013820] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
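[annotation] The target starting here runs plain -m 0x1, with no --disable-cpumask-locks, so it claims core 0 at startup. Purely as an illustration (SPDK does this internally in spdk_app_start; flock(1) is a stand-in, not the real mechanism), the claim behaves like an exclusive lock on the core's file:

  # illustration only: what a competing claim on core 0 looks like
  exec 9>/var/tmp/spdk_cpu_lock_000
  flock -n 9 || echo 'core 0 already claimed by another process'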
00:06:50.536 [2024-07-23 00:11:05.014318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75662 ] 00:06:50.536 [2024-07-23 00:11:05.159589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.536 [2024-07-23 00:11:05.201930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.473 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.473 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:51.473 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=75678 00:06:51.473 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 75678 /var/tmp/spdk2.sock 00:06:51.474 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:51.474 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:51.474 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75678 /var/tmp/spdk2.sock 00:06:51.474 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:51.474 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.474 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:51.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.474 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.474 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75678 /var/tmp/spdk2.sock 00:06:51.474 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75678 ']' 00:06:51.474 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.474 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.474 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.474 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.474 00:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.474 [2024-07-23 00:11:05.889041] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
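[annotation] waitforlisten for the second target (pid 75678) is wrapped in NOT, so this step passes only if the target fails to come up; the ERROR lines that follow are the expected outcome, not a test failure. Reduced to its core (the real helper in autotest_common.sh also manages xtrace and exit-code bookkeeping):

  # reduced expected-failure wrapper, as used above
  NOT() { ! "$@"; }
  NOT waitforlisten 75678 /var/tmp/spdk2.sock \
      && echo 'second target failed to start, as expected'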
00:06:51.474 [2024-07-23 00:11:05.889169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75678 ] 00:06:51.474 [2024-07-23 00:11:06.038386] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 75662 has claimed it. 00:06:51.474 [2024-07-23 00:11:06.038453] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:52.042 ERROR: process (pid: 75678) is no longer running 00:06:52.042 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75678) - No such process 00:06:52.042 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:52.042 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:52.042 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:52.042 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:52.042 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:52.042 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:52.042 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 75662 00:06:52.042 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.042 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75662 00:06:52.301 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 75662 00:06:52.301 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75662 ']' 00:06:52.301 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75662 00:06:52.301 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:52.301 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:52.301 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75662 00:06:52.301 killing process with pid 75662 00:06:52.301 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:52.301 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:52.301 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75662' 00:06:52.301 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75662 00:06:52.301 00:11:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75662 00:06:52.869 ************************************ 00:06:52.869 END TEST locking_app_on_locked_coremask 00:06:52.869 ************************************ 00:06:52.869 00:06:52.869 real 0m2.393s 00:06:52.869 user 0m2.521s 00:06:52.869 sys 0m0.745s 00:06:52.869 00:11:07 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.869 00:11:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.869 00:11:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:52.869 00:11:07 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:52.869 00:11:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.869 00:11:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.869 ************************************ 00:06:52.869 START TEST locking_overlapped_coremask 00:06:52.869 ************************************ 00:06:52.869 00:11:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:52.869 00:11:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=75724 00:06:52.869 00:11:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 75724 /var/tmp/spdk.sock 00:06:52.869 00:11:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:52.869 00:11:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 75724 ']' 00:06:52.869 00:11:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.869 00:11:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:52.869 00:11:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.870 00:11:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:52.870 00:11:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.870 [2024-07-23 00:11:07.477203] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:52.870 [2024-07-23 00:11:07.477340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75724 ] 00:06:53.128 [2024-07-23 00:11:07.628165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.128 [2024-07-23 00:11:07.672981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.128 [2024-07-23 00:11:07.673092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.128 [2024-07-23 00:11:07.673011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.694 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=75738 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 75738 /var/tmp/spdk2.sock 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75738 /var/tmp/spdk2.sock 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75738 /var/tmp/spdk2.sock 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 75738 ']' 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:53.695 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.695 [2024-07-23 00:11:08.369683] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
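[annotation] The two masks overlap on exactly one core: -m 0x7 selects cores 0-2 and -m 0x1c selects cores 2-4, so core 2 is contested. A quick pure-bash decoder (hypothetical helper, just for reading the masks in this trace):

  mask_to_cores() {
      local mask=$(( $1 )) core=0 cores=()
      while (( mask )); do
          (( mask & 1 )) && cores+=("$core")
          (( mask >>= 1, core += 1 ))
      done
      echo "${cores[*]}"
  }
  mask_to_cores 0x7    # -> 0 1 2
  mask_to_cores 0x1c   # -> 2 3 4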
00:06:53.695 [2024-07-23 00:11:08.369811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75738 ] 00:06:53.953 [2024-07-23 00:11:08.519566] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75724 has claimed it. 00:06:53.953 [2024-07-23 00:11:08.519639] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:54.522 ERROR: process (pid: 75738) is no longer running 00:06:54.523 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75738) - No such process 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 75724 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 75724 ']' 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 75724 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:54.523 00:11:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75724 00:06:54.523 00:11:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:54.523 00:11:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:54.523 00:11:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75724' 00:06:54.523 killing process with pid 75724 00:06:54.523 00:11:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 75724 00:06:54.523 00:11:09 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 75724 00:06:54.782 00:06:54.782 real 0m2.017s 00:06:54.782 user 0m5.251s 00:06:54.782 sys 0m0.569s 00:06:54.782 00:11:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.782 00:11:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.782 ************************************ 00:06:54.782 END TEST locking_overlapped_coremask 00:06:54.782 ************************************ 00:06:54.782 00:11:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:54.782 00:11:09 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:54.782 00:11:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.782 00:11:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.782 ************************************ 00:06:54.782 START TEST locking_overlapped_coremask_via_rpc 00:06:54.782 ************************************ 00:06:54.782 00:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:55.041 00:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:55.041 00:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=75784 00:06:55.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.041 00:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 75784 /var/tmp/spdk.sock 00:06:55.041 00:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75784 ']' 00:06:55.041 00:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.041 00:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:55.041 00:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.041 00:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:55.041 00:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.041 [2024-07-23 00:11:09.559115] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:55.041 [2024-07-23 00:11:09.559238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75784 ] 00:06:55.041 [2024-07-23 00:11:09.712464] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
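[annotation] In this RPC variant both targets boot with --disable-cpumask-locks and only claim their cores afterwards, through the framework_enable_cpumask_locks method (the name is visible in the JSON-RPC exchange below). A sketch of the same two calls with SPDK's rpc.py client; the script path is an assumption, the method name comes from this trace:

  # sketch: claim cores after startup instead of at boot
  scripts/rpc.py framework_enable_cpumask_locks                         # first target: claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: must fail on core 2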
00:06:55.041 [2024-07-23 00:11:09.712537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.300 [2024-07-23 00:11:09.757704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.300 [2024-07-23 00:11:09.757769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.300 [2024-07-23 00:11:09.757846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.867 00:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:55.867 00:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:55.867 00:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:55.867 00:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=75798 00:06:55.867 00:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 75798 /var/tmp/spdk2.sock 00:06:55.867 00:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75798 ']' 00:06:55.867 00:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.867 00:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:55.867 00:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.867 00:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:55.867 00:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.867 [2024-07-23 00:11:10.417985] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:55.868 [2024-07-23 00:11:10.418342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75798 ] 00:06:56.126 [2024-07-23 00:11:10.571082] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:56.126 [2024-07-23 00:11:10.571130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.126 [2024-07-23 00:11:10.667453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.126 [2024-07-23 00:11:10.667526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.126 [2024-07-23 00:11:10.667618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.693 [2024-07-23 00:11:11.242466] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75784 has claimed it. 
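[annotation] The claim error above leaves the lock state exactly as it was before the failed RPC: the first target keeps its three locks and the second holds none, which is what the check_remaining_locks assertion later in this trace verifies. Restated compactly from the trace's own expansion:

  # restatement of check_remaining_locks as expanded in this trace
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]] \
      && echo 'only cores 0-2 remain locked'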
00:06:56.693 request: 00:06:56.693 { 00:06:56.693 "method": "framework_enable_cpumask_locks", 00:06:56.693 "req_id": 1 00:06:56.693 } 00:06:56.693 Got JSON-RPC error response 00:06:56.693 response: 00:06:56.693 { 00:06:56.693 "code": -32603, 00:06:56.693 "message": "Failed to claim CPU core: 2" 00:06:56.693 } 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 75784 /var/tmp/spdk.sock 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75784 ']' 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.693 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.951 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.951 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:56.951 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 75798 /var/tmp/spdk2.sock 00:06:56.951 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75798 ']' 00:06:56.951 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.951 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.951 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
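[annotation] On the wire the refusal is a plain JSON-RPC internal error (-32603) whose message names the contested core, so a caller can key off either field. A hedged error-path sketch; the rpc.py path is assumed and the grep pattern is taken verbatim from the response above:

  # detect the expected overlap failure from the second instance
  if scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 2>&1 \
          | grep -q 'Failed to claim CPU core'; then
      echo 'lock overlap rejected, as expected'
  fi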
00:06:56.951 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.951 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.209 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:57.210 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:57.210 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:57.210 ************************************ 00:06:57.210 END TEST locking_overlapped_coremask_via_rpc 00:06:57.210 ************************************ 00:06:57.210 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:57.210 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:57.210 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:57.210 00:06:57.210 real 0m2.181s 00:06:57.210 user 0m0.916s 00:06:57.210 sys 0m0.196s 00:06:57.210 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.210 00:11:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.210 00:11:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:57.210 00:11:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75784 ]] 00:06:57.210 00:11:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75784 00:06:57.210 00:11:11 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75784 ']' 00:06:57.210 00:11:11 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75784 00:06:57.210 00:11:11 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:57.210 00:11:11 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:57.210 00:11:11 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75784 00:06:57.210 killing process with pid 75784 00:06:57.210 00:11:11 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:57.210 00:11:11 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:57.210 00:11:11 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75784' 00:06:57.210 00:11:11 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 75784 00:06:57.210 00:11:11 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 75784 00:06:57.468 00:11:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75798 ]] 00:06:57.468 00:11:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75798 00:06:57.468 00:11:12 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75798 ']' 00:06:57.468 00:11:12 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75798 00:06:57.468 00:11:12 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:57.468 00:11:12 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:57.468 
00:11:12 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75798 00:06:57.727 00:11:12 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:57.727 00:11:12 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:57.727 00:11:12 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75798' 00:06:57.727 killing process with pid 75798 00:06:57.727 00:11:12 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 75798 00:06:57.727 00:11:12 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 75798 00:06:57.986 00:11:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.986 00:11:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:57.986 00:11:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75784 ]] 00:06:57.986 00:11:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75784 00:06:57.986 00:11:12 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75784 ']' 00:06:57.986 00:11:12 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75784 00:06:57.986 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (75784) - No such process 00:06:57.986 Process with pid 75784 is not found 00:06:57.986 Process with pid 75798 is not found 00:06:57.986 00:11:12 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 75784 is not found' 00:06:57.986 00:11:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75798 ]] 00:06:57.986 00:11:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75798 00:06:57.986 00:11:12 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75798 ']' 00:06:57.986 00:11:12 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75798 00:06:57.986 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (75798) - No such process 00:06:57.986 00:11:12 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 75798 is not found' 00:06:57.986 00:11:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.986 00:06:57.986 real 0m19.078s 00:06:57.986 user 0m30.636s 00:06:57.986 sys 0m6.182s 00:06:57.986 00:11:12 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.986 00:11:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.986 ************************************ 00:06:57.986 END TEST cpu_locks 00:06:57.986 ************************************ 00:06:57.986 ************************************ 00:06:57.986 END TEST event 00:06:57.986 ************************************ 00:06:57.986 00:06:57.986 real 0m45.481s 00:06:57.986 user 1m23.370s 00:06:57.986 sys 0m10.089s 00:06:57.986 00:11:12 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.986 00:11:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.244 00:11:12 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:58.244 00:11:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:58.244 00:11:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.245 00:11:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.245 ************************************ 00:06:58.245 START TEST thread 00:06:58.245 ************************************ 00:06:58.245 00:11:12 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:58.245 * Looking for test storage... 
00:06:58.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:58.245 00:11:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.245 00:11:12 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:58.245 00:11:12 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.245 00:11:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.245 ************************************ 00:06:58.245 START TEST thread_poller_perf 00:06:58.245 ************************************ 00:06:58.245 00:11:12 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.245 [2024-07-23 00:11:12.861456] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:58.245 [2024-07-23 00:11:12.861599] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75922 ] 00:06:58.607 [2024-07-23 00:11:13.011607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.607 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:58.607 [2024-07-23 00:11:13.055076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.547 ====================================== 00:06:59.547 busy:2504028410 (cyc) 00:06:59.547 total_run_count: 395000 00:06:59.547 tsc_hz: 2490000000 (cyc) 00:06:59.547 ====================================== 00:06:59.547 poller_cost: 6339 (cyc), 2545 (nsec) 00:06:59.547 00:06:59.547 real 0m1.331s 00:06:59.547 user 0m1.129s 00:06:59.547 sys 0m0.096s 00:06:59.547 ************************************ 00:06:59.547 END TEST thread_poller_perf 00:06:59.547 ************************************ 00:06:59.547 00:11:14 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.547 00:11:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.547 00:11:14 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.547 00:11:14 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:59.547 00:11:14 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.547 00:11:14 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.547 ************************************ 00:06:59.547 START TEST thread_poller_perf 00:06:59.547 ************************************ 00:06:59.547 00:11:14 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.806 [2024-07-23 00:11:14.255823] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:59.806 [2024-07-23 00:11:14.255959] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75959 ] 00:06:59.806 [2024-07-23 00:11:14.405114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.806 Running 1000 pollers for 1 seconds with 0 microseconds period. 
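[annotation] The poller_cost figure above is straight division: busy cycles over total_run_count, converted to nanoseconds via tsc_hz. Reproducing the first run's summary:

  # first run: 2504028410 cyc busy / 395000 runs -> 6339 cyc, ~2545 ns at 2.49 GHz
  busy=2504028410 runs=395000 tsc_hz=2490000000
  cost_cyc=$(( busy / runs ))
  cost_ns=$(( cost_cyc * 1000000000 / tsc_hz ))
  echo "poller_cost: ${cost_cyc} (cyc), ${cost_ns} (nsec)"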
00:06:59.806 [2024-07-23 00:11:14.448211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.182 ====================================== 00:07:01.182 busy:2494050002 (cyc) 00:07:01.182 total_run_count: 5195000 00:07:01.182 tsc_hz: 2490000000 (cyc) 00:07:01.182 ====================================== 00:07:01.182 poller_cost: 480 (cyc), 192 (nsec) 00:07:01.182 00:07:01.182 real 0m1.322s 00:07:01.182 user 0m1.116s 00:07:01.182 sys 0m0.101s 00:07:01.182 00:11:15 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.182 00:11:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.182 ************************************ 00:07:01.182 END TEST thread_poller_perf 00:07:01.182 ************************************ 00:07:01.182 00:11:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.182 00:07:01.182 real 0m2.915s 00:07:01.182 user 0m2.345s 00:07:01.182 sys 0m0.364s 00:07:01.182 00:11:15 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.182 00:11:15 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.182 ************************************ 00:07:01.182 END TEST thread 00:07:01.182 ************************************ 00:07:01.182 00:11:15 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:01.182 00:11:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:01.182 00:11:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.182 00:11:15 -- common/autotest_common.sh@10 -- # set +x 00:07:01.182 ************************************ 00:07:01.182 START TEST accel 00:07:01.182 ************************************ 00:07:01.182 00:11:15 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:01.182 * Looking for test storage... 00:07:01.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:01.182 00:11:15 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:01.182 00:11:15 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:01.182 00:11:15 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:01.182 00:11:15 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=76029 00:07:01.182 00:11:15 accel -- accel/accel.sh@63 -- # waitforlisten 76029 00:07:01.182 00:11:15 accel -- common/autotest_common.sh@827 -- # '[' -z 76029 ']' 00:07:01.182 00:11:15 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.182 00:11:15 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:01.182 00:11:15 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.182 00:11:15 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:01.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
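[annotation] Same arithmetic for the 0-microsecond run above: 2494050002 / 5195000 ≈ 480 cyc ≈ 192 ns per call, roughly 13x cheaper than the 1 µs timed run. A plausible reading (an inference, not stated by the trace) is that period-0 pollers run as active pollers on every reactor iteration and skip the timer bookkeeping:

  # second run, same formula; matches the 480 (cyc) / 192 (nsec) summary above
  echo $(( 2494050002 / 5195000 )) $(( 2494050002 / 5195000 * 1000000000 / 2490000000 ))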
00:07:01.182 00:11:15 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:01.182 00:11:15 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:01.182 00:11:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.182 00:11:15 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.182 00:11:15 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.182 00:11:15 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.182 00:11:15 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.182 00:11:15 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.182 00:11:15 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:01.182 00:11:15 accel -- accel/accel.sh@41 -- # jq -r . 00:07:01.441 [2024-07-23 00:11:15.891176] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:01.441 [2024-07-23 00:11:15.891321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76029 ] 00:07:01.441 [2024-07-23 00:11:16.042932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.441 [2024-07-23 00:11:16.087364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.009 00:11:16 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:02.009 00:11:16 accel -- common/autotest_common.sh@860 -- # return 0 00:07:02.009 00:11:16 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:02.009 00:11:16 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:02.009 00:11:16 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:02.009 00:11:16 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:02.009 00:11:16 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:02.009 00:11:16 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:02.009 00:11:16 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:02.009 00:11:16 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.009 00:11:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.268 00:11:16 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.268 00:11:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.268 00:11:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.268 00:11:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.268 00:11:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.268 00:11:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.268 00:11:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.268 00:11:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.268 00:11:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.268 00:11:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.268 00:11:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.268 00:11:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.268 00:11:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.268 00:11:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.268 00:11:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.268 00:11:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.268 00:11:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.268 00:11:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.268 00:11:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.268 00:11:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.268 00:11:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.268 00:11:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.268 
00:11:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.268 00:11:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.268 00:11:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.268 00:11:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.268 00:11:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.268 00:11:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.268 00:11:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.268 00:11:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.268 00:11:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.268 00:11:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.268 00:11:16 accel -- accel/accel.sh@75 -- # killprocess 76029 00:07:02.268 00:11:16 accel -- common/autotest_common.sh@946 -- # '[' -z 76029 ']' 00:07:02.268 00:11:16 accel -- common/autotest_common.sh@950 -- # kill -0 76029 00:07:02.268 00:11:16 accel -- common/autotest_common.sh@951 -- # uname 00:07:02.268 00:11:16 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:02.268 00:11:16 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76029 00:07:02.268 00:11:16 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:02.268 killing process with pid 76029 00:07:02.268 00:11:16 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:02.268 00:11:16 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76029' 00:07:02.268 00:11:16 accel -- common/autotest_common.sh@965 -- # kill 76029 00:07:02.268 00:11:16 accel -- common/autotest_common.sh@970 -- # wait 76029 00:07:02.527 00:11:17 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:02.527 00:11:17 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:02.527 00:11:17 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:02.527 00:11:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.527 00:11:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.527 00:11:17 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:02.527 00:11:17 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:02.527 00:11:17 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:02.527 00:11:17 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.527 00:11:17 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.527 00:11:17 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.527 00:11:17 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.527 00:11:17 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.527 00:11:17 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:02.527 00:11:17 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
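[annotation] The long IFS== loop above is just table parsing: accel_get_opc_assignments returns an opcode-to-module map, jq flattens it into key=value lines, and every opcode in this run resolves to the software module. Condensed to its essentials (rpc_cmd and the jq filter are exactly as they appear in this trace):

  # condensed opcode-assignment walk
  declare -A expected_opcs
  while IFS== read -r opc module; do
      expected_opcs["$opc"]=$module
  done < <(rpc_cmd accel_get_opc_assignments \
           | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')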
00:07:02.787 00:11:17 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.787 00:11:17 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:02.787 00:11:17 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:02.787 00:11:17 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:02.787 00:11:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.787 00:11:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.787 ************************************ 00:07:02.787 START TEST accel_missing_filename 00:07:02.787 ************************************ 00:07:02.787 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:02.787 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:02.787 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:02.787 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:02.787 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.787 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:02.787 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.787 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:02.787 00:11:17 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:02.787 00:11:17 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:02.787 00:11:17 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.787 00:11:17 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.787 00:11:17 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.787 00:11:17 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.787 00:11:17 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.787 00:11:17 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:02.787 00:11:17 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:02.787 [2024-07-23 00:11:17.339887] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:02.787 [2024-07-23 00:11:17.340014] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76088 ] 00:07:03.046 [2024-07-23 00:11:17.488769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.046 [2024-07-23 00:11:17.532222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.046 [2024-07-23 00:11:17.577170] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.046 [2024-07-23 00:11:17.646939] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:03.046 A filename is required. 
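accel_missing_filename above runs accel_perf through the suite's NOT wrapper: the command is expected to fail (no -l input file for a compress workload), the raw exit status of 234 is normalized by stripping the 128+signal offset down to 106, and the wrapper reports success only because the final status is nonzero. A sketch of that inversion logic, reconstructed from the es= steps in the trace rather than copied from autotest_common.sh:

    NOT() {
        local es=0
        "$@" || es=$?                         # run the command, keep its status
        (( es > 128 )) && es=$(( es - 128 ))  # strip the 128+signal offset
        (( es != 0 ))                         # succeed only if the command failed
    }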
00:07:03.305 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:03.305 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.305 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:03.305 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:03.305 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:03.305 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.305 00:07:03.305 real 0m0.456s 00:07:03.305 user 0m0.237s 00:07:03.305 sys 0m0.155s 00:07:03.305 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.305 00:11:17 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:03.305 ************************************ 00:07:03.305 END TEST accel_missing_filename 00:07:03.305 ************************************ 00:07:03.305 00:11:17 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.305 00:11:17 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:03.305 00:11:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.305 00:11:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.305 ************************************ 00:07:03.305 START TEST accel_compress_verify 00:07:03.305 ************************************ 00:07:03.305 00:11:17 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.305 00:11:17 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:03.305 00:11:17 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.305 00:11:17 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:03.305 00:11:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.305 00:11:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:03.305 00:11:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.305 00:11:17 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.305 00:11:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.305 00:11:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:03.305 00:11:17 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.305 00:11:17 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.305 00:11:17 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.305 00:11:17 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.305 00:11:17 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.305 00:11:17 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:03.305 00:11:17 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:07:03.305 [2024-07-23 00:11:17.860008] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:03.305 [2024-07-23 00:11:17.860157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76108 ] 00:07:03.564 [2024-07-23 00:11:18.011491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.564 [2024-07-23 00:11:18.055025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.564 [2024-07-23 00:11:18.100169] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.564 [2024-07-23 00:11:18.170040] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:03.823 00:07:03.823 Compression does not support the verify option, aborting. 00:07:03.823 00:11:18 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:03.823 00:11:18 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.823 00:11:18 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:03.823 00:11:18 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:03.823 00:11:18 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:03.823 00:11:18 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.823 00:07:03.823 real 0m0.458s 00:07:03.823 user 0m0.242s 00:07:03.823 sys 0m0.154s 00:07:03.823 00:11:18 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.823 ************************************ 00:07:03.823 END TEST accel_compress_verify 00:07:03.823 00:11:18 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:03.823 ************************************ 00:07:03.823 00:11:18 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:03.823 00:11:18 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:03.823 00:11:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.823 00:11:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.823 ************************************ 00:07:03.823 START TEST accel_wrong_workload 00:07:03.823 ************************************ 00:07:03.823 00:11:18 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:03.823 00:11:18 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:03.823 00:11:18 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:03.823 00:11:18 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:03.823 00:11:18 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.823 00:11:18 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:03.823 00:11:18 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.823 00:11:18 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:03.823 00:11:18 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:03.823 00:11:18 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:03.823 00:11:18 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.823 00:11:18 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.823 00:11:18 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.823 00:11:18 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.823 00:11:18 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.823 00:11:18 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:03.823 00:11:18 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:03.823 Unsupported workload type: foobar 00:07:03.823 [2024-07-23 00:11:18.379978] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:03.823 accel_perf options: 00:07:03.823 [-h help message] 00:07:03.823 [-q queue depth per core] 00:07:03.823 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:03.823 [-T number of threads per core 00:07:03.823 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:03.823 [-t time in seconds] 00:07:03.824 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:03.824 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:03.824 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:03.824 [-l for compress/decompress workloads, name of uncompressed input file 00:07:03.824 [-S for crc32c workload, use this seed value (default 0) 00:07:03.824 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:03.824 [-f for fill workload, use this BYTE value (default 255) 00:07:03.824 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:03.824 [-y verify result if this switch is on] 00:07:03.824 [-a tasks to allocate per core (default: same value as -q)] 00:07:03.824 Can be used to spread operations across a wider range of memory. 
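The option dump above is accel_perf's own usage text, printed because foobar is not a valid -w workload. These flags map directly onto the invocations this suite runs; for example, the crc32c and fill tests later in this log reduce to (same binary path as used throughout this run):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y

that is, run for one second (-t 1) with the named workload (-w), seeding crc32c with 32 (-S) or filling with byte value 128 at queue depth 64 and 64 tasks per core (-f/-q/-a), and verifying results (-y); the traced commands additionally pass a JSON config via -c /dev/fd/62.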
00:07:03.824 00:11:18 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:03.824 00:11:18 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.824 00:11:18 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:03.824 00:11:18 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.824 00:07:03.824 real 0m0.069s 00:07:03.824 user 0m0.063s 00:07:03.824 sys 0m0.046s 00:07:03.824 ************************************ 00:07:03.824 END TEST accel_wrong_workload 00:07:03.824 ************************************ 00:07:03.824 00:11:18 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.824 00:11:18 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:03.824 00:11:18 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:03.824 00:11:18 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:03.824 00:11:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.824 00:11:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.824 ************************************ 00:07:03.824 START TEST accel_negative_buffers 00:07:03.824 ************************************ 00:07:03.824 00:11:18 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:03.824 00:11:18 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:03.824 00:11:18 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:03.824 00:11:18 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:03.824 00:11:18 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.824 00:11:18 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:03.824 00:11:18 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.824 00:11:18 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:03.824 00:11:18 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:03.824 00:11:18 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:03.824 00:11:18 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.824 00:11:18 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.824 00:11:18 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.824 00:11:18 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.824 00:11:18 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.824 00:11:18 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:03.824 00:11:18 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:04.083 -x option must be non-negative. 
00:07:04.083 [2024-07-23 00:11:18.506066] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:04.083 accel_perf options: 00:07:04.083 [-h help message] 00:07:04.083 [-q queue depth per core] 00:07:04.083 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:04.083 [-T number of threads per core 00:07:04.083 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:04.083 [-t time in seconds] 00:07:04.083 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:04.083 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:04.083 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:04.083 [-l for compress/decompress workloads, name of uncompressed input file 00:07:04.083 [-S for crc32c workload, use this seed value (default 0) 00:07:04.083 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:04.083 [-f for fill workload, use this BYTE value (default 255) 00:07:04.083 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:04.083 [-y verify result if this switch is on] 00:07:04.083 [-a tasks to allocate per core (default: same value as -q)] 00:07:04.083 Can be used to spread operations across a wider range of memory. 00:07:04.083 00:11:18 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:04.083 00:11:18 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.083 00:11:18 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:04.083 00:11:18 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.083 00:07:04.083 real 0m0.080s 00:07:04.083 user 0m0.074s 00:07:04.083 sys 0m0.045s 00:07:04.083 00:11:18 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.083 ************************************ 00:07:04.083 END TEST accel_negative_buffers 00:07:04.083 00:11:18 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:04.083 ************************************ 00:07:04.083 00:11:18 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:04.083 00:11:18 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:04.083 00:11:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.083 00:11:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.083 ************************************ 00:07:04.083 START TEST accel_crc32c 00:07:04.083 ************************************ 00:07:04.083 00:11:18 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:04.083 00:11:18 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:04.083 00:11:18 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:04.083 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.083 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.083 00:11:18 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:04.083 00:11:18 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:04.083 00:11:18 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:07:04.083 00:11:18 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.083 00:11:18 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.084 00:11:18 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.084 00:11:18 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.084 00:11:18 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.084 00:11:18 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:04.084 00:11:18 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:04.084 [2024-07-23 00:11:18.652427] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:04.084 [2024-07-23 00:11:18.652571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76175 ] 00:07:04.343 [2024-07-23 00:11:18.792092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.343 [2024-07-23 00:11:18.833776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- 
# val='4096 bytes' 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 00:11:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:05.723 00:11:20 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.723 00:07:05.723 real 0m1.440s 00:07:05.723 user 0m1.210s 00:07:05.723 sys 0m0.146s 00:07:05.723 00:11:20 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.723 ************************************ 00:07:05.723 00:11:20 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:05.723 END TEST accel_crc32c 00:07:05.723 ************************************ 00:07:05.723 00:11:20 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:05.723 00:11:20 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:05.723 00:11:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.723 00:11:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.723 ************************************ 00:07:05.723 START TEST accel_crc32c_C2 00:07:05.723 ************************************ 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
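The long val=/case/IFS=:/read run that fills the accel_crc32c trace above (and repeats for every accel_test below) is the harness replaying a recorded option stream: each line is split on ':' into a name and a value, and the opcode and module under test are captured along the way (accel_opc=crc32c and accel_module=software in this run). A rough sketch of that loop; the case patterns are paraphrased, since the trace shows only the resulting assignments:

    while IFS=: read -r var val; do
        case "$var" in
            *opc)    accel_opc=$val ;;      # e.g. crc32c
            *module) accel_module=$val ;;   # e.g. software
        esac
    done < "$recorded_output"               # hypothetical input file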
00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:05.723 [2024-07-23 00:11:20.152342] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:05.723 [2024-07-23 00:11:20.152493] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76209 ] 00:07:05.723 [2024-07-23 00:11:20.300794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.723 [2024-07-23 00:11:20.345905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.723 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.724 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.724 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:05.724 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.724 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.724 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.724 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.983 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.983 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.983 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.983 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.983 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.983 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.983 00:11:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.917 00:07:06.917 real 0m1.447s 00:07:06.917 user 0m1.211s 00:07:06.917 sys 0m0.152s 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.917 ************************************ 00:07:06.917 00:11:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:06.917 END TEST accel_crc32c_C2 00:07:06.917 ************************************ 00:07:07.176 00:11:21 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:07.176 00:11:21 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:07.176 00:11:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.176 00:11:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.176 ************************************ 00:07:07.176 START TEST accel_copy 00:07:07.176 ************************************ 00:07:07.176 00:11:21 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:07.176 00:11:21 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:07.176 00:11:21 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:07.176 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.176 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.176 00:11:21 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:07.176 00:11:21 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:07.176 00:11:21 accel.accel_copy -- accel/accel.sh@12 -- # 
build_accel_config 00:07:07.176 00:11:21 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.176 00:11:21 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.176 00:11:21 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.176 00:11:21 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.176 00:11:21 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.176 00:11:21 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:07.176 00:11:21 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:07.176 [2024-07-23 00:11:21.667128] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:07.176 [2024-07-23 00:11:21.667292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76246 ] 00:07:07.176 [2024-07-23 00:11:21.807050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.176 [2024-07-23 00:11:21.850490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.435 00:11:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.371 00:11:23 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.371 00:11:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.630 00:11:23 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.630 00:11:23 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:08.630 00:11:23 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.630 00:07:08.630 real 0m1.440s 00:07:08.630 user 0m0.017s 00:07:08.630 sys 0m0.005s 00:07:08.630 00:11:23 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.630 00:11:23 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:08.630 ************************************ 00:07:08.630 END TEST accel_copy 00:07:08.630 ************************************ 00:07:08.630 00:11:23 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.630 00:11:23 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:08.630 00:11:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.630 00:11:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.630 ************************************ 00:07:08.630 START TEST accel_fill 00:07:08.630 ************************************ 00:07:08.630 00:11:23 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.630 00:11:23 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:08.630 00:11:23 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:08.630 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.630 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.630 00:11:23 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.630 00:11:23 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.630 00:11:23 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:08.630 00:11:23 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.630 00:11:23 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.630 00:11:23 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.630 00:11:23 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.630 00:11:23 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.630 00:11:23 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:08.630 00:11:23 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
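Every test's PASS rests on the same three checks seen at accel.sh@27 above: a module was detected, an opcode was detected, and the module matches the expected one. (The \s\o\f\t\w\a\r\e rendering in the trace is just bash xtrace escaping a quoted right-hand side to mark it as a literal, not a pattern, match.) In plain form, roughly:

    [[ -n $accel_module ]]                                  # a module was captured
    [[ -n $accel_opc ]]                                     # an opcode was captured
    [[ $accel_module == "${expected_opcs[$accel_opc]}" ]]   # literal match against expectation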
00:07:08.630 [2024-07-23 00:11:23.167791] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:08.630 [2024-07-23 00:11:23.167919] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76276 ] 00:07:08.895 [2024-07-23 00:11:23.320594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.895 [2024-07-23 00:11:23.364960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:08.895 00:11:23 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.895 00:11:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:10.271 00:11:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.271 00:07:10.271 real 0m1.452s 00:07:10.271 user 0m0.019s 00:07:10.271 sys 0m0.005s 00:07:10.271 00:11:24 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.271 00:11:24 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:10.271 ************************************ 00:07:10.271 END TEST accel_fill 00:07:10.271 ************************************ 00:07:10.271 00:11:24 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:10.271 00:11:24 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:10.271 00:11:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.271 00:11:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.271 ************************************ 00:07:10.271 START TEST accel_copy_crc32c 00:07:10.271 ************************************ 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:10.271 [2024-07-23 00:11:24.692325] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
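Note on the trace above: the accel_fill run is accel.sh consuming a colon-separated var:val stream with IFS=: plus read -r var val, dispatching each pair through a case "$var" arm that captures values such as accel_opc=fill and accel_module=software. A minimal standalone sketch of that dispatch loop, assuming a here-string in place of the real config stream (the raw stream's key names are not visible in the xtrace, so the input below is illustrative):

    while IFS=: read -r var val; do
        case "$var" in
            opc) accel_opc=$val ;;        # e.g. fill, copy_crc32c
            module) accel_module=$val ;;  # e.g. software
            *) : ;;                       # unmatched vars are ignored
        esac
    done <<< $'opc:fill\nmodule:software'
    echo "opc=$accel_opc module=$accel_module"   # opc=fill module=software
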
00:07:10.271 [2024-07-23 00:11:24.692478] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76317 ] 00:07:10.271 [2024-07-23 00:11:24.830775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.271 [2024-07-23 00:11:24.874644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.271 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.272 00:11:24 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.272 00:11:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.681 00:07:11.681 real 0m1.439s 00:07:11.681 user 0m0.024s 00:07:11.681 sys 0m0.001s 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.681 ************************************ 00:07:11.681 END TEST accel_copy_crc32c 00:07:11.681 ************************************ 00:07:11.681 00:11:26 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:11.681 00:11:26 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:11.681 00:11:26 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:11.681 00:11:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.681 00:11:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.681 ************************************ 00:07:11.681 START TEST accel_copy_crc32c_C2 00:07:11.681 ************************************ 00:07:11.681 00:11:26 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:11.681 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.681 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:11.681 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.681 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.681 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:11.681 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:07:11.681 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.681 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.681 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.681 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.681 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.681 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.681 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:11.681 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:11.681 [2024-07-23 00:11:26.195118] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:11.681 [2024-07-23 00:11:26.195256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76347 ] 00:07:11.681 [2024-07-23 00:11:26.345339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.940 [2024-07-23 00:11:26.389149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.940 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.941 00:11:26 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.941 00:11:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:13.319 ************************************ 00:07:13.319 END TEST accel_copy_crc32c_C2 00:07:13.319 ************************************ 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.319 00:07:13.319 real 0m1.451s 00:07:13.319 user 0m0.018s 00:07:13.319 sys 0m0.003s 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.319 00:11:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:13.319 00:11:27 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:07:13.319 00:11:27 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:13.319 00:11:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.319 00:11:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.319 ************************************ 00:07:13.319 START TEST accel_dualcast 00:07:13.319 ************************************ 00:07:13.319 00:11:27 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:13.319 [2024-07-23 00:11:27.702306] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
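Aside: outside the harness, the dualcast case starting above reduces to invoking the example binary directly. The path is taken from the accel.sh@12 line in this log; dropping -c /dev/fd/62 (the JSON config descriptor the harness wires up) assumes the built-in defaults suffice:

    # Mirrors the traced dualcast run; EAL/hugepage setup must already be in place.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y
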
00:07:13.319 [2024-07-23 00:11:27.702453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76388 ] 00:07:13.319 [2024-07-23 00:11:27.854128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.319 [2024-07-23 00:11:27.895200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:13.319 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.320 00:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.696 
00:11:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:14.696 ************************************ 00:07:14.696 END TEST accel_dualcast 00:07:14.696 ************************************ 00:07:14.696 00:11:29 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.696 00:07:14.696 real 0m1.446s 00:07:14.696 user 0m0.012s 00:07:14.696 sys 0m0.000s 00:07:14.696 00:11:29 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.696 00:11:29 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:14.696 00:11:29 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:14.696 00:11:29 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:14.696 00:11:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.696 00:11:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.696 ************************************ 00:07:14.696 START TEST accel_compare 00:07:14.696 ************************************ 00:07:14.696 00:11:29 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:14.696 00:11:29 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:14.696 00:11:29 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:14.696 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.696 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.696 00:11:29 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:14.696 00:11:29 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:14.696 00:11:29 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:14.696 00:11:29 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.696 00:11:29 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.696 00:11:29 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.696 00:11:29 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.696 00:11:29 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.696 00:11:29 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:14.696 00:11:29 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:14.696 [2024-07-23 00:11:29.212374] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
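Aside: the accel.sh@108 line above shows the run_test wrapper in action. Reduced to its behavior as observable in this log (START/END banners around a timed command), a hypothetical re-implementation looks like the sketch below; the real autotest_common.sh version does more, including xtrace toggling and argument-count checks like the '[' 7 -le 1 ']' seen in the trace.

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"    # the log's 'real 0m1.4xxs' lines come from timing like this
        echo "END TEST $name"
    }
    run_test accel_compare accel_test -t 1 -w compare -y   # usage as traced; accel_test is defined elsewhere in the suite
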
00:07:14.696 [2024-07-23 00:11:29.212518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76418 ] 00:07:14.696 [2024-07-23 00:11:29.361775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.954 [2024-07-23 00:11:29.403645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.954 00:11:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.954 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.954 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.954 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.954 00:11:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.954 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.954 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.954 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.954 00:11:29 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:14.954 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.954 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.954 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.954 00:11:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.954 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.955 00:11:29 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.955 00:11:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.332 00:11:30 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:16.332 00:11:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.332 ************************************ 00:07:16.332 END TEST accel_compare 00:07:16.332 ************************************ 00:07:16.332 00:07:16.332 real 0m1.446s 00:07:16.332 user 0m0.010s 00:07:16.332 sys 0m0.001s 00:07:16.332 00:11:30 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.332 00:11:30 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:16.332 00:11:30 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:16.332 00:11:30 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:16.332 00:11:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.332 00:11:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.332 ************************************ 00:07:16.332 START TEST accel_xor 00:07:16.332 ************************************ 00:07:16.332 00:11:30 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:16.332 [2024-07-23 00:11:30.729756] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
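Aside: each test's pass condition is the trio of accel.sh@27 checks visible after every run above, in which a module and an opcode must have been parsed and the module must be the software fallback. The xtrace shows the already-expanded form ([[ -n software ]], [[ software == \s\o\f\t\w\a\r\e ]]); an unexpanded approximation using the variable names from the trace:

    [[ -n $accel_module ]] && [[ -n $accel_opc ]] \
        && [[ $accel_module == software ]] && echo "accel checks passed"
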
00:07:16.332 [2024-07-23 00:11:30.729879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76454 ] 00:07:16.332 [2024-07-23 00:11:30.872116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.332 [2024-07-23 00:11:30.913769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:16.332 00:11:30 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.332 00:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.333 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.333 00:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.710 00:07:17.710 real 0m1.443s 00:07:17.710 user 0m1.211s 00:07:17.710 sys 0m0.147s 00:07:17.710 00:11:32 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.710 00:11:32 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:17.710 ************************************ 00:07:17.710 END TEST accel_xor 00:07:17.710 ************************************ 00:07:17.710 00:11:32 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:17.710 00:11:32 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:17.710 00:11:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.710 00:11:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.710 ************************************ 00:07:17.710 START TEST accel_xor 00:07:17.710 ************************************ 00:07:17.710 00:11:32 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:17.710 00:11:32 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:17.710 [2024-07-23 00:11:32.243041] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
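Aside: this second accel_xor test differs from the first only in -x 3. The first run traced val=2 (the default xor source-buffer count) while the trace that follows sets val=3. Mirrored directly, with the same caveats as the dualcast sketch above:

    # -x raises the xor source-buffer count from the default 2 to 3.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
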
00:07:17.710 [2024-07-23 00:11:32.243168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76489 ] 00:07:17.710 [2024-07-23 00:11:32.385997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.969 [2024-07-23 00:11:32.427055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:17.969 00:11:32 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.969 00:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:19.345 00:11:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.345 00:07:19.345 real 0m1.442s 00:07:19.345 user 0m1.205s 00:07:19.345 sys 0m0.152s 00:07:19.345 00:11:33 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.345 00:11:33 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:19.345 ************************************ 00:07:19.345 END TEST accel_xor 00:07:19.345 ************************************ 00:07:19.346 00:11:33 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:19.346 00:11:33 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:19.346 00:11:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.346 00:11:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.346 ************************************ 00:07:19.346 START TEST accel_dif_verify 00:07:19.346 ************************************ 00:07:19.346 00:11:33 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:19.346 [2024-07-23 00:11:33.749558] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:19.346 [2024-07-23 00:11:33.750056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76525 ] 00:07:19.346 [2024-07-23 00:11:33.900555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.346 [2024-07-23 00:11:33.946713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.346 00:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:20.721 00:11:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.721 00:07:20.721 real 0m1.457s 00:07:20.721 user 0m1.223s 00:07:20.721 sys 0m0.150s 00:07:20.721 00:11:35 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.721 00:11:35 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:20.721 ************************************ 00:07:20.721 END TEST accel_dif_verify 00:07:20.721 ************************************ 00:07:20.721 00:11:35 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:20.721 00:11:35 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:20.721 00:11:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.721 00:11:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.721 ************************************ 00:07:20.721 START TEST accel_dif_generate 00:07:20.721 ************************************ 00:07:20.721 00:11:35 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:20.721 00:11:35 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:20.721 00:11:35 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:20.721 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.721 00:11:35 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:20.721 00:11:35 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:07:20.721 00:11:35 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:20.721 00:11:35 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:20.721 00:11:35 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.721 00:11:35 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.721 00:11:35 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.721 00:11:35 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.721 00:11:35 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.721 00:11:35 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:20.721 00:11:35 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:20.721 [2024-07-23 00:11:35.272692] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:20.721 [2024-07-23 00:11:35.272843] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76560 ] 00:07:20.980 [2024-07-23 00:11:35.423941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.980 [2024-07-23 00:11:35.469527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 
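The dif_* workloads in this stretch all run with the same geometry, per the '4096 bytes', '512 bytes' and '8 bytes' values the loop reads back (above for dif_verify, below for dif_generate). The trace does not label these values, but they match the usual DIF arrangement: 4096-byte buffers, a 512-byte protection interval, and an 8-byte DIF per interval. The 8-byte field is the standard T10 protection-information tuple: a 2-byte guard CRC, a 2-byte application tag, and a 4-byte reference tag. Quick arithmetic on the traced values (illustration only):

    block=4096 interval=512 dif=8
    echo "$((block / interval)) intervals -> $((block / interval * dif)) DIF bytes per block"
    # prints: 8 intervals -> 64 DIF bytes per block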
00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.980 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.981 00:11:35 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.981 00:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:22.357 00:11:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.357 00:07:22.357 real 0m1.456s 00:07:22.357 user 0m1.227s 00:07:22.357 sys 0m0.146s 00:07:22.357 00:11:36 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.357 
************************************ 00:07:22.357 END TEST accel_dif_generate 00:07:22.357 ************************************ 00:07:22.357 00:11:36 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:22.357 00:11:36 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:22.357 00:11:36 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:22.357 00:11:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.357 00:11:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.357 ************************************ 00:07:22.357 START TEST accel_dif_generate_copy 00:07:22.357 ************************************ 00:07:22.357 00:11:36 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:22.357 00:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:22.357 00:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:22.357 00:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.357 00:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.357 00:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:22.357 00:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:22.357 00:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:22.357 00:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.357 00:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.357 00:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.357 00:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.357 00:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.357 00:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:22.358 00:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:22.358 [2024-07-23 00:11:36.793784] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
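Broadly, the three DIF opcodes divide up as follows: dif_verify checks protection information that is already present, dif_generate computes and inserts it in place, and dif_generate_copy generates it while copying the data into a separate destination buffer, which is why it is exercised as its own workload here. Any of these runs can be reproduced by hand, minus the /dev/fd/62 config channel the harness wires up (paths as in this log):

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w dif_generate_copy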
00:07:22.358 [2024-07-23 00:11:36.793940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76596 ] 00:07:22.358 [2024-07-23 00:11:36.946196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.358 [2024-07-23 00:11:36.996342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.616 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.616 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.616 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.616 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.616 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.616 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.616 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.616 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.617 00:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.553 00:07:23.553 real 0m1.460s 00:07:23.553 user 0m0.021s 00:07:23.553 sys 0m0.004s 00:07:23.553 ************************************ 00:07:23.553 END TEST accel_dif_generate_copy 00:07:23.553 ************************************ 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.553 00:11:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:23.813 00:11:38 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:23.813 00:11:38 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:23.813 00:11:38 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:23.813 00:11:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.813 00:11:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.813 ************************************ 00:07:23.813 START TEST accel_comp 00:07:23.813 ************************************ 00:07:23.813 00:11:38 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:23.813 00:11:38 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:23.813 00:11:38 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:23.813 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
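The compress test that starts here is the first workload in this run to take an input corpus: -l points accel_perf at test/accel/bib in the SPDK tree, and that file path itself shows up as one of the val= settings read back below. A hand-run equivalent, again minus the config-fd plumbing:

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib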
00:07:23.813 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.813 00:11:38 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:23.813 00:11:38 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:23.813 00:11:38 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:23.813 00:11:38 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.813 00:11:38 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.813 00:11:38 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.813 00:11:38 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.813 00:11:38 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.813 00:11:38 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:23.813 00:11:38 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:23.813 [2024-07-23 00:11:38.318662] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:23.813 [2024-07-23 00:11:38.318817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76631 ] 00:07:23.813 [2024-07-23 00:11:38.471210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.072 [2024-07-23 00:11:38.521045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.072 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.073 00:11:38 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.073 00:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.073 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.073 00:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:25.451 00:11:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.451 00:07:25.451 real 0m1.469s 00:07:25.451 user 0m0.018s 00:07:25.451 sys 0m0.007s 00:07:25.451 00:11:39 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.451 ************************************ 00:07:25.451 END TEST accel_comp 00:07:25.451 ************************************ 00:07:25.451 00:11:39 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:25.451 00:11:39 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:25.451 00:11:39 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:25.451 00:11:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.451 00:11:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.451 ************************************ 00:07:25.451 START TEST accel_decomp 00:07:25.451 ************************************ 00:07:25.451 00:11:39 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:25.451 00:11:39 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:25.451 
00:11:39 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:25.451 00:11:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.451 00:11:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.451 00:11:39 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:25.451 00:11:39 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:25.451 00:11:39 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:25.451 00:11:39 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.451 00:11:39 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.451 00:11:39 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.451 00:11:39 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.451 00:11:39 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.451 00:11:39 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:25.451 00:11:39 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:25.451 [2024-07-23 00:11:39.887476] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:25.451 [2024-07-23 00:11:39.888082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76667 ] 00:07:25.451 [2024-07-23 00:11:40.064539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.451 [2024-07-23 00:11:40.114510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.710 00:11:40 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.711 00:11:40 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.711 00:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.687 00:11:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.687 00:11:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.687 00:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.687 00:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.687 00:11:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.687 00:11:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.687 00:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.687 00:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.687 00:11:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.687 00:11:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.687 00:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.687 00:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.687 00:11:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.687 00:11:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.687 00:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.688 00:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.688 00:11:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.688 00:11:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.688 00:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.688 00:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.688 00:11:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.688 00:11:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.688 00:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.688 00:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.688 00:11:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.688 00:11:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.688 00:11:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.688 00:07:26.688 real 0m1.526s 00:07:26.688 user 0m1.266s 00:07:26.688 sys 0m0.174s 00:07:26.688 00:11:41 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.688 00:11:41 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:26.688 ************************************ 00:07:26.688 END TEST accel_decomp 00:07:26.688 ************************************ 00:07:26.947 00:11:41 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:26.947 00:11:41 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:26.947 00:11:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.947 00:11:41 accel -- common/autotest_common.sh@10 -- # set +x 
00:07:26.947 ************************************ 00:07:26.947 START TEST accel_decmop_full 00:07:26.947 ************************************ 00:07:26.947 00:11:41 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:26.947 00:11:41 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:26.947 00:11:41 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:26.947 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.947 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.947 00:11:41 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:26.947 00:11:41 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:26.947 00:11:41 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:26.947 00:11:41 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.947 00:11:41 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.947 00:11:41 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.947 00:11:41 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.947 00:11:41 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.947 00:11:41 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:26.947 00:11:41 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:26.947 [2024-07-23 00:11:41.452141] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:26.947 [2024-07-23 00:11:41.452507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76701 ] 00:07:26.947 [2024-07-23 00:11:41.600656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.206 [2024-07-23 00:11:41.650398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.206 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.207 00:11:41 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.207 00:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.585 00:11:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:28.585 00:11:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.585 00:11:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.585 00:11:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.585 00:11:42 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:07:28.585 00:11:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.585 00:11:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:28.586 ************************************ 00:07:28.586 END TEST accel_decmop_full 00:07:28.586 ************************************ 00:07:28.586 00:11:42 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.586 00:07:28.586 real 0m1.482s 00:07:28.586 user 0m1.236s 00:07:28.586 sys 0m0.156s 00:07:28.586 00:11:42 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.586 00:11:42 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:28.586 00:11:42 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:28.586 00:11:42 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:28.586 00:11:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.586 00:11:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.586 ************************************ 00:07:28.586 START TEST accel_decomp_mcore 00:07:28.586 ************************************ 00:07:28.586 00:11:42 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:28.586 00:11:42 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:28.586 00:11:42 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:28.586 00:11:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:42 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
00:07:28.586 00:11:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:28.586 00:11:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:28.586 00:11:42 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.586 00:11:42 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.586 00:11:42 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.586 00:11:42 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.586 00:11:42 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.586 00:11:42 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:28.586 00:11:42 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:28.586 [2024-07-23 00:11:43.002795] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:28.586 [2024-07-23 00:11:43.002938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76738 ] 00:07:28.586 [2024-07-23 00:11:43.153465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.586 [2024-07-23 00:11:43.206622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.586 [2024-07-23 00:11:43.206818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.586 [2024-07-23 00:11:43.207008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.586 [2024-07-23 00:11:43.206872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.586 00:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.964 00:07:29.964 real 0m1.495s 00:07:29.964 user 0m0.015s 00:07:29.964 sys 0m0.001s 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.964 ************************************ 00:07:29.964 END TEST accel_decomp_mcore 00:07:29.964 ************************************ 00:07:29.964 00:11:44 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:29.964 00:11:44 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.964 00:11:44 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:29.964 00:11:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.964 00:11:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.964 ************************************ 00:07:29.964 START TEST accel_decomp_full_mcore 00:07:29.964 ************************************ 00:07:29.964 00:11:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.964 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:29.964 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:29.964 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.965 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.965 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.965 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.965 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:29.965 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.965 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.965 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.965 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.965 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.965 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:29.965 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
00:07:29.965 [2024-07-23 00:11:44.559108] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:29.965 [2024-07-23 00:11:44.559257] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76771 ] 00:07:30.224 [2024-07-23 00:11:44.712574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.224 [2024-07-23 00:11:44.765816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.224 [2024-07-23 00:11:44.766061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.224 [2024-07-23 00:11:44.766166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.224 [2024-07-23 00:11:44.766255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.224 00:11:44 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.224 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.225 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.225 00:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.604 00:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.604 00:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.604 00:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.604 00:11:46 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.604 00:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.604 ************************************ 00:07:31.604 END TEST accel_decomp_full_mcore 00:07:31.604 ************************************ 00:07:31.604 00:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.604 00:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:31.604 00:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.604 00:07:31.604 real 0m1.503s 00:07:31.604 user 0m0.018s 00:07:31.604 sys 0m0.003s 00:07:31.604 00:11:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.604 00:11:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:31.604 00:11:46 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:31.604 00:11:46 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:31.604 00:11:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.604 00:11:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.604 ************************************ 00:07:31.604 START TEST accel_decomp_mthread 00:07:31.604 ************************************ 00:07:31.604 00:11:46 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:31.604 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:31.604 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:31.604 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.604 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.604 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:31.604 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:31.604 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:31.604 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.604 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.604 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.604 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.604 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.604 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:31.604 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:31.604 [2024-07-23 00:11:46.129902] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:31.604 [2024-07-23 00:11:46.130043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76815 ] 00:07:31.604 [2024-07-23 00:11:46.281360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.864 [2024-07-23 00:11:46.326910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.864 00:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.271 00:07:33.271 real 0m1.468s 00:07:33.271 user 0m0.014s 00:07:33.271 sys 0m0.001s 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.271 00:11:47 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:33.271 ************************************ 00:07:33.271 END TEST accel_decomp_mthread 00:07:33.271 ************************************ 00:07:33.271 00:11:47 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.271 00:11:47 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:33.271 00:11:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.271 00:11:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.271 ************************************ 00:07:33.271 START TEST accel_decomp_full_mthread 00:07:33.271 ************************************ 00:07:33.271 00:11:47 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.271 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:33.271 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:33.271 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.271 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.271 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:33.272 [2024-07-23 00:11:47.668289] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:33.272 [2024-07-23 00:11:47.668537] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76845 ] 00:07:33.272 [2024-07-23 00:11:47.817490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.272 [2024-07-23 00:11:47.860417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.272 00:11:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.650 00:07:34.650 real 0m1.479s 00:07:34.650 user 0m1.250s 00:07:34.650 sys 0m0.144s 00:07:34.650 ************************************ 00:07:34.650 END TEST accel_decomp_full_mthread 00:07:34.650 ************************************ 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.650 00:11:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:34.650 00:11:49 accel -- 
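The accel_decomp_full_mthread pass above drives the accel_perf example binary recorded at accel/accel.sh@12. A minimal sketch of reproducing that run by hand, assuming the repo layout shown in the trace; the harness additionally pipes its accel JSON config over /dev/fd/62, which is omitted here, and the flag readings (-t seconds, -w workload, -y verify, -T threads) are inferred from their names rather than confirmed by the log:

  # 1-second software decompress of test/accel/bib, verified, on two threads
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -T 2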
accel/accel.sh@124 -- # [[ n == y ]] 00:07:34.650 00:11:49 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:34.650 00:11:49 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:34.650 00:11:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.650 00:11:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.650 00:11:49 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:34.650 00:11:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.650 00:11:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.650 00:11:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.650 00:11:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.650 00:11:49 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.650 00:11:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:34.650 00:11:49 accel -- accel/accel.sh@41 -- # jq -r . 00:07:34.650 ************************************ 00:07:34.650 START TEST accel_dif_functional_tests 00:07:34.650 ************************************ 00:07:34.650 00:11:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:34.650 [2024-07-23 00:11:49.253702] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:34.650 [2024-07-23 00:11:49.253813] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76887 ] 00:07:34.909 [2024-07-23 00:11:49.406609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.909 [2024-07-23 00:11:49.451078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.909 [2024-07-23 00:11:49.451187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.909 [2024-07-23 00:11:49.451308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.909 00:07:34.909 00:07:34.909 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.909 http://cunit.sourceforge.net/ 00:07:34.909 00:07:34.909 00:07:34.909 Suite: accel_dif 00:07:34.909 Test: verify: DIF generated, GUARD check ...passed 00:07:34.909 Test: verify: DIF generated, APPTAG check ...passed 00:07:34.909 Test: verify: DIF generated, REFTAG check ...passed 00:07:34.909 Test: verify: DIF not generated, GUARD check ...passed 00:07:34.909 Test: verify: DIF not generated, APPTAG check ...passed 00:07:34.909 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 00:11:49.522177] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:34.909 [2024-07-23 00:11:49.522370] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:34.909 [2024-07-23 00:11:49.522440] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:34.909 passed 00:07:34.909 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:34.909 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:34.909 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:34.909 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:34.909 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-07-23 00:11:49.522724] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, 
Expected=28, Actual=14 00:07:34.909 passed 00:07:34.909 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:34.909 Test: verify copy: DIF generated, GUARD check ...passed 00:07:34.909 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:34.909 Test: verify copy: DIF generated, REFTAG check ...[2024-07-23 00:11:49.522959] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:34.909 passed 00:07:34.909 Test: verify copy: DIF not generated, GUARD check ...passed 00:07:34.909 Test: verify copy: DIF not generated, APPTAG check ...passed 00:07:34.909 Test: verify copy: DIF not generated, REFTAG check ...passed 00:07:34.909 Test: generate copy: DIF generated, GUARD check ...passed 00:07:34.909 Test: generate copy: DIF generated, APTTAG check ...[2024-07-23 00:11:49.523394] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:34.909 [2024-07-23 00:11:49.523576] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:34.909 [2024-07-23 00:11:49.523637] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:34.909 passed 00:07:34.909 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:34.909 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:34.909 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:34.909 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:34.909 Test: generate copy: iovecs-len validate ...passed 00:07:34.909 Test: generate copy: buffer alignment validate ...[2024-07-23 00:11:49.524208] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
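The CUnit output above comes from the standalone DIF functional binary started at accel/accel.sh@137, which reads an accel JSON config from file descriptor 62. A minimal sketch of the same invocation under bash; feeding an empty '{}' config over the descriptor is an assumption for illustration, not the config the harness actually built:

  # run the DIF suite, supplying a placeholder accel config on fd 62
  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 62<<< '{}'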
00:07:34.909 passed 00:07:34.909 00:07:34.909 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.909 suites 1 1 n/a 0 0 00:07:34.909 tests 26 26 26 0 0 00:07:34.909 asserts 115 115 115 0 n/a 00:07:34.909 00:07:34.909 Elapsed time = 0.007 seconds 00:07:35.168 ************************************ 00:07:35.168 END TEST accel_dif_functional_tests 00:07:35.168 ************************************ 00:07:35.168 00:07:35.168 real 0m0.575s 00:07:35.168 user 0m0.643s 00:07:35.168 sys 0m0.211s 00:07:35.168 00:11:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.168 00:11:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:35.168 00:07:35.168 real 0m34.147s 00:07:35.168 user 0m34.383s 00:07:35.168 sys 0m5.289s 00:07:35.168 00:11:49 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.168 00:11:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.168 ************************************ 00:07:35.168 END TEST accel 00:07:35.168 ************************************ 00:07:35.426 00:11:49 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:35.426 00:11:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:35.426 00:11:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.427 00:11:49 -- common/autotest_common.sh@10 -- # set +x 00:07:35.427 ************************************ 00:07:35.427 START TEST accel_rpc 00:07:35.427 ************************************ 00:07:35.427 00:11:49 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:35.427 * Looking for test storage... 00:07:35.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:35.427 00:11:50 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:35.427 00:11:50 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=76958 00:07:35.427 00:11:50 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 76958 00:07:35.427 00:11:50 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:35.427 00:11:50 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 76958 ']' 00:07:35.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.427 00:11:50 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.427 00:11:50 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:35.427 00:11:50 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.427 00:11:50 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:35.427 00:11:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.686 [2024-07-23 00:11:50.111063] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:35.686 [2024-07-23 00:11:50.111196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76958 ] 00:07:35.686 [2024-07-23 00:11:50.262159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.686 [2024-07-23 00:11:50.304309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.253 00:11:50 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:36.253 00:11:50 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:36.253 00:11:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:36.253 00:11:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:36.253 00:11:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:36.253 00:11:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:36.253 00:11:50 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:36.253 00:11:50 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:36.253 00:11:50 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.253 00:11:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.253 ************************************ 00:07:36.253 START TEST accel_assign_opcode 00:07:36.253 ************************************ 00:07:36.253 00:11:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:36.253 00:11:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:36.253 00:11:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.253 00:11:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.253 [2024-07-23 00:11:50.908206] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:36.253 00:11:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.253 00:11:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:36.253 00:11:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.253 00:11:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.253 [2024-07-23 00:11:50.920297] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:36.253 00:11:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.253 00:11:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:36.253 00:11:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.253 00:11:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.511 00:11:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.511 00:11:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:36.511 00:11:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.511 00:11:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.511 00:11:51 accel_rpc.accel_assign_opcode -- 
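The opcode-assignment test above drives spdk_tgt (started with --wait-for-rpc at accel_rpc.sh@13) over JSON-RPC. The method names below are taken from the rpc_cmd calls in the trace; invoking them through scripts/rpc.py instead of the rpc_cmd helper is an assumption:

  # route the copy opcode to the software module, then finish framework init
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  # confirm the assignment, as accel_rpc.sh@42 does with jq and grep
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # expect: software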
accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:36.511 00:11:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:36.511 00:11:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.511 software 00:07:36.511 ************************************ 00:07:36.511 END TEST accel_assign_opcode 00:07:36.511 ************************************ 00:07:36.511 00:07:36.511 real 0m0.246s 00:07:36.511 user 0m0.042s 00:07:36.511 sys 0m0.020s 00:07:36.511 00:11:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.511 00:11:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.769 00:11:51 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 76958 00:07:36.769 00:11:51 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 76958 ']' 00:07:36.769 00:11:51 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 76958 00:07:36.769 00:11:51 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:36.769 00:11:51 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:36.769 00:11:51 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76958 00:07:36.769 killing process with pid 76958 00:07:36.769 00:11:51 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:36.769 00:11:51 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:36.769 00:11:51 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76958' 00:07:36.769 00:11:51 accel_rpc -- common/autotest_common.sh@965 -- # kill 76958 00:07:36.769 00:11:51 accel_rpc -- common/autotest_common.sh@970 -- # wait 76958 00:07:37.028 00:07:37.028 real 0m1.734s 00:07:37.028 user 0m1.651s 00:07:37.028 sys 0m0.501s 00:07:37.028 00:11:51 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.028 ************************************ 00:07:37.028 END TEST accel_rpc 00:07:37.028 ************************************ 00:07:37.028 00:11:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.028 00:11:51 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:37.028 00:11:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:37.028 00:11:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.028 00:11:51 -- common/autotest_common.sh@10 -- # set +x 00:07:37.028 ************************************ 00:07:37.028 START TEST app_cmdline 00:07:37.028 ************************************ 00:07:37.028 00:11:51 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:37.287 * Looking for test storage... 
00:07:37.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:37.287 00:11:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:37.287 00:11:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=77041 00:07:37.287 00:11:51 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:37.287 00:11:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 77041 00:07:37.287 00:11:51 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 77041 ']' 00:07:37.287 00:11:51 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.287 00:11:51 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:37.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.287 00:11:51 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.287 00:11:51 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:37.287 00:11:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:37.287 [2024-07-23 00:11:51.911235] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:37.287 [2024-07-23 00:11:51.911403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77041 ] 00:07:37.546 [2024-07-23 00:11:52.060293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.546 [2024-07-23 00:11:52.103298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.114 00:11:52 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:38.114 00:11:52 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:38.114 00:11:52 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:38.373 { 00:07:38.373 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:07:38.373 "fields": { 00:07:38.373 "major": 24, 00:07:38.373 "minor": 5, 00:07:38.373 "patch": 1, 00:07:38.373 "suffix": "-pre", 00:07:38.373 "commit": "5fa2f5086" 00:07:38.373 } 00:07:38.373 } 00:07:38.373 00:11:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:38.373 00:11:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:38.373 00:11:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:38.373 00:11:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:38.373 00:11:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:38.373 00:11:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:38.373 00:11:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:38.373 00:11:52 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.373 00:11:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.373 00:11:52 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.373 00:11:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:38.373 00:11:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:38.373 00:11:52 app_cmdline -- app/cmdline.sh@30 -- 
# NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.373 00:11:52 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:38.373 00:11:52 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.373 00:11:52 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.373 00:11:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.373 00:11:52 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.373 00:11:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.373 00:11:52 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.373 00:11:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.373 00:11:52 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.373 00:11:52 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:38.373 00:11:52 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.632 request: 00:07:38.632 { 00:07:38.632 "method": "env_dpdk_get_mem_stats", 00:07:38.632 "req_id": 1 00:07:38.632 } 00:07:38.632 Got JSON-RPC error response 00:07:38.632 response: 00:07:38.632 { 00:07:38.632 "code": -32601, 00:07:38.632 "message": "Method not found" 00:07:38.632 } 00:07:38.632 00:11:53 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:38.632 00:11:53 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:38.632 00:11:53 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:38.632 00:11:53 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:38.632 00:11:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 77041 00:07:38.632 00:11:53 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 77041 ']' 00:07:38.632 00:11:53 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 77041 00:07:38.632 00:11:53 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:38.632 00:11:53 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:38.632 00:11:53 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77041 00:07:38.632 killing process with pid 77041 00:07:38.632 00:11:53 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:38.632 00:11:53 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:38.632 00:11:53 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77041' 00:07:38.632 00:11:53 app_cmdline -- common/autotest_common.sh@965 -- # kill 77041 00:07:38.632 00:11:53 app_cmdline -- common/autotest_common.sh@970 -- # wait 77041 00:07:38.889 00:07:38.889 real 0m1.813s 00:07:38.889 user 0m1.975s 00:07:38.889 sys 0m0.531s 00:07:38.889 ************************************ 00:07:38.889 END TEST app_cmdline 00:07:38.889 ************************************ 00:07:38.889 00:11:53 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.889 00:11:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.889 00:11:53 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 
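The app_cmdline pass above exercises the RPC allowlist: spdk_tgt was launched at cmdline.sh@16 permitting only spdk_get_version and rpc_get_methods, so env_dpdk_get_mem_stats is rejected with code -32601. A minimal sketch of the same check, using the paths from the trace:

  # start the target with exactly two RPCs allowed
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version          # succeeds, returns the version object
  ./scripts/rpc.py env_dpdk_get_mem_stats    # fails: "Method not found" (-32601)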
00:07:38.889 00:11:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:38.889 00:11:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.889 00:11:53 -- common/autotest_common.sh@10 -- # set +x 00:07:39.150 ************************************ 00:07:39.150 START TEST version 00:07:39.150 ************************************ 00:07:39.150 00:11:53 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:39.150 * Looking for test storage... 00:07:39.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:39.150 00:11:53 version -- app/version.sh@17 -- # get_header_version major 00:07:39.150 00:11:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:39.150 00:11:53 version -- app/version.sh@14 -- # cut -f2 00:07:39.150 00:11:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.150 00:11:53 version -- app/version.sh@17 -- # major=24 00:07:39.150 00:11:53 version -- app/version.sh@18 -- # get_header_version minor 00:07:39.150 00:11:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:39.150 00:11:53 version -- app/version.sh@14 -- # cut -f2 00:07:39.150 00:11:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.150 00:11:53 version -- app/version.sh@18 -- # minor=5 00:07:39.150 00:11:53 version -- app/version.sh@19 -- # get_header_version patch 00:07:39.150 00:11:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:39.150 00:11:53 version -- app/version.sh@14 -- # cut -f2 00:07:39.150 00:11:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.150 00:11:53 version -- app/version.sh@19 -- # patch=1 00:07:39.150 00:11:53 version -- app/version.sh@20 -- # get_header_version suffix 00:07:39.150 00:11:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:39.150 00:11:53 version -- app/version.sh@14 -- # cut -f2 00:07:39.150 00:11:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.150 00:11:53 version -- app/version.sh@20 -- # suffix=-pre 00:07:39.150 00:11:53 version -- app/version.sh@22 -- # version=24.5 00:07:39.150 00:11:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:39.150 00:11:53 version -- app/version.sh@25 -- # version=24.5.1 00:07:39.150 00:11:53 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:39.150 00:11:53 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:39.150 00:11:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:39.150 00:11:53 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:39.150 00:11:53 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:39.150 00:07:39.150 real 0m0.225s 00:07:39.150 user 0m0.131s 00:07:39.150 sys 0m0.145s 00:07:39.150 ************************************ 00:07:39.150 END TEST version 00:07:39.150 ************************************ 00:07:39.150 00:11:53 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.150 00:11:53 version -- common/autotest_common.sh@10 -- # set +x 00:07:39.408 
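The version test above assembles 24.5.1rc0 from include/spdk/version.h and cross-checks the Python package. A sketch of one extraction, mirroring get_header_version at app/version.sh@13-14 and assuming (as cut -f2 implies) a tab between the macro and its value:

  # major component from the header, e.g. 24
  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h \
      | cut -f2 | tr -d '"'
  # cross-check the installed Python package, as version.sh@30 does
  python3 -c 'import spdk; print(spdk.__version__)'    # 24.5.1rc0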
00:11:53 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:39.408 00:11:53 -- spdk/autotest.sh@198 -- # uname -s 00:07:39.408 00:11:53 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:39.408 00:11:53 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:39.408 00:11:53 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:39.408 00:11:53 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:07:39.408 00:11:53 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:39.408 00:11:53 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:39.408 00:11:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.408 00:11:53 -- common/autotest_common.sh@10 -- # set +x 00:07:39.408 ************************************ 00:07:39.408 START TEST blockdev_nvme 00:07:39.408 ************************************ 00:07:39.408 00:11:53 blockdev_nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:39.408 * Looking for test storage... 00:07:39.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:39.408 00:11:54 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=77191 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' 
SIGINT SIGTERM EXIT 00:07:39.408 00:11:54 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 77191 00:07:39.408 00:11:54 blockdev_nvme -- common/autotest_common.sh@827 -- # '[' -z 77191 ']' 00:07:39.408 00:11:54 blockdev_nvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.408 00:11:54 blockdev_nvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:39.408 00:11:54 blockdev_nvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.408 00:11:54 blockdev_nvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:39.408 00:11:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.667 [2024-07-23 00:11:54.127493] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:39.667 [2024-07-23 00:11:54.127770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77191 ] 00:07:39.667 [2024-07-23 00:11:54.277039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.667 [2024-07-23 00:11:54.318635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.234 00:11:54 blockdev_nvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:40.234 00:11:54 blockdev_nvme -- common/autotest_common.sh@860 -- # return 0 00:07:40.234 00:11:54 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:07:40.234 00:11:54 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:07:40.234 00:11:54 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:40.234 00:11:54 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:40.234 00:11:54 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:40.492 00:11:55 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:40.492 00:11:55 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.492 00:11:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.751 00:11:55 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.751 00:11:55 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:07:40.751 00:11:55 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.751 00:11:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.751 00:11:55 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.751 00:11:55 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:07:40.751 00:11:55 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:07:40.751 00:11:55 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 
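The single-quoted one-liner loaded at blockdev.sh@83 is the bdev subsystem config emitted by scripts/gen_nvme.sh; the same content pretty-printed for readability (nothing added or changed):

  {
    "subsystem": "bdev",
    "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
    ]
  }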
00:07:40.751 00:11:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.751 00:11:55 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.751 00:11:55 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:07:40.751 00:11:55 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.751 00:11:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.751 00:11:55 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.751 00:11:55 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:40.751 00:11:55 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.751 00:11:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.751 00:11:55 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.751 00:11:55 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:07:40.751 00:11:55 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:07:40.751 00:11:55 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.751 00:11:55 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:07:40.751 00:11:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.010 00:11:55 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.010 00:11:55 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:07:41.010 00:11:55 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "c8d317a3-7d69-4586-a9ba-0ed1553bdedb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c8d317a3-7d69-4586-a9ba-0ed1553bdedb",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "12dde8fc-894f-4efc-a176-bc8959ed3c4b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "12dde8fc-894f-4efc-a176-bc8959ed3c4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' 
' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "1777e5fb-607d-41d8-b1db-0e7ffb8721c6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1777e5fb-607d-41d8-b1db-0e7ffb8721c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' 00:11:55 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:07:41.010 },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "dadfdf51-e522-4545-8f23-288c28679030"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dadfdf51-e522-4545-8f23-288c28679030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "64cc3295-e181-4ad3-a076-41eb8db8ad7d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": 
"64cc3295-e181-4ad3-a076-41eb8db8ad7d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "892409d6-2188-44a5-b223-cf640ef5fccb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "892409d6-2188-44a5-b223-cf640ef5fccb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:41.010 00:11:55 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:07:41.010 00:11:55 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:07:41.010 00:11:55 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:07:41.010 00:11:55 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 77191 00:07:41.010 00:11:55 blockdev_nvme -- common/autotest_common.sh@946 -- # '[' -z 77191 ']' 00:07:41.010 00:11:55 blockdev_nvme -- common/autotest_common.sh@950 -- # kill -0 77191 00:07:41.010 00:11:55 blockdev_nvme -- common/autotest_common.sh@951 -- # uname 00:07:41.010 00:11:55 blockdev_nvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:41.010 00:11:55 blockdev_nvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77191 00:07:41.010 killing process with pid 77191 00:07:41.010 00:11:55 blockdev_nvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:41.010 00:11:55 blockdev_nvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:41.010 00:11:55 blockdev_nvme -- common/autotest_common.sh@964 -- 
# echo 'killing process with pid 77191' 00:07:41.010 00:11:55 blockdev_nvme -- common/autotest_common.sh@965 -- # kill 77191 00:07:41.010 00:11:55 blockdev_nvme -- common/autotest_common.sh@970 -- # wait 77191 00:07:41.269 00:11:55 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:41.269 00:11:55 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:41.269 00:11:55 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:41.269 00:11:55 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.269 00:11:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.269 ************************************ 00:07:41.269 START TEST bdev_hello_world 00:07:41.269 ************************************ 00:07:41.269 00:11:55 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:41.527 [2024-07-23 00:11:56.033205] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:41.527 [2024-07-23 00:11:56.033402] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77259 ] 00:07:41.527 [2024-07-23 00:11:56.187399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.786 [2024-07-23 00:11:56.232404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.044 [2024-07-23 00:11:56.602966] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:42.044 [2024-07-23 00:11:56.603020] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:42.044 [2024-07-23 00:11:56.603040] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:42.044 [2024-07-23 00:11:56.605239] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:42.044 [2024-07-23 00:11:56.605773] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:42.044 [2024-07-23 00:11:56.605812] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:42.044 [2024-07-23 00:11:56.606045] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
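The write/read round-trip above is the hello_bdev example launched at blockdev.sh@760; it can be rerun standalone against the same generated config. Command as recorded in the trace:

  # open Nvme0n1, write "Hello World!", then read it back
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1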
00:07:42.044 00:07:42.044 [2024-07-23 00:11:56.606075] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:42.303 00:07:42.303 real 0m0.888s 00:07:42.303 ************************************ 00:07:42.303 END TEST bdev_hello_world 00:07:42.303 ************************************ 00:07:42.303 user 0m0.565s 00:07:42.303 sys 0m0.221s 00:07:42.303 00:11:56 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.303 00:11:56 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:42.303 00:11:56 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:07:42.303 00:11:56 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:42.303 00:11:56 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.303 00:11:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.303 ************************************ 00:07:42.303 START TEST bdev_bounds 00:07:42.303 ************************************ 00:07:42.303 00:11:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:07:42.303 Process bdevio pid: 77290 00:07:42.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.303 00:11:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=77290 00:07:42.303 00:11:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:42.303 00:11:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:42.303 00:11:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 77290' 00:07:42.303 00:11:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 77290 00:07:42.303 00:11:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 77290 ']' 00:07:42.303 00:11:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.303 00:11:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:42.303 00:11:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.303 00:11:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:42.303 00:11:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:42.562 [2024-07-23 00:11:56.992952] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
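The bdev_bounds wrapper traced here follows a start/wait/drive pattern: launch bdevio in wait mode, poll its RPC socket, then fire the CUnit suites. A condensed sketch of that flow, assembled from the commands visible in this trace (the socket-polling loop is a simplification; the real waitforlisten helper retries with a bounded timeout rather than testing the socket path directly):

SPDK_REPO=/home/vagrant/spdk_repo/spdk
"$SPDK_REPO/test/bdev/bdevio/bdevio" -w -s 0 \
    --json "$SPDK_REPO/test/bdev/bdev.json" &
bdevio_pid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # simplified waitforlisten -- assumption
"$SPDK_REPO/test/bdev/bdevio/tests.py" perform_tests   # runs the suites printed below
kill "$bdevio_pid"; wait "$bdevio_pid"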
00:07:42.562 [2024-07-23 00:11:56.993713] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77290 ] 00:07:42.562 [2024-07-23 00:11:57.144317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.562 [2024-07-23 00:11:57.191985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.562 [2024-07-23 00:11:57.192013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.562 [2024-07-23 00:11:57.192125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.499 00:11:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:43.499 00:11:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:07:43.499 00:11:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:43.499 I/O targets: 00:07:43.499 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:43.499 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:43.499 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:43.499 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:43.499 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:43.499 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:43.499 00:07:43.499 00:07:43.499 CUnit - A unit testing framework for C - Version 2.1-3 00:07:43.499 http://cunit.sourceforge.net/ 00:07:43.499 00:07:43.499 00:07:43.499 Suite: bdevio tests on: Nvme3n1 00:07:43.499 Test: blockdev write read block ...passed 00:07:43.499 Test: blockdev write zeroes read block ...passed 00:07:43.499 Test: blockdev write zeroes read no split ...passed 00:07:43.499 Test: blockdev write zeroes read split ...passed 00:07:43.499 Test: blockdev write zeroes read split partial ...passed 00:07:43.499 Test: blockdev reset ...[2024-07-23 00:11:57.930182] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:43.499 passed 00:07:43.499 Test: blockdev write read 8 blocks ...[2024-07-23 00:11:57.932095] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:43.499 passed 00:07:43.499 Test: blockdev write read size > 128k ...passed 00:07:43.499 Test: blockdev write read invalid size ...passed 00:07:43.499 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.499 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.499 Test: blockdev write read max offset ...passed 00:07:43.499 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.499 Test: blockdev writev readv 8 blocks ...passed 00:07:43.499 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.499 Test: blockdev writev readv block ...passed 00:07:43.499 Test: blockdev writev readv size > 128k ...passed 00:07:43.499 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.499 Test: blockdev comparev and writev ...[2024-07-23 00:11:57.940288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ad20e000 len:0x1000 00:07:43.499 [2024-07-23 00:11:57.940351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:43.499 passed 00:07:43.499 Test: blockdev nvme passthru rw ...passed 00:07:43.499 Test: blockdev nvme passthru vendor specific ...[2024-07-23 00:11:57.941457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:43.499 [2024-07-23 00:11:57.941512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:43.499 passed 00:07:43.499 Test: blockdev nvme admin passthru ...passed 00:07:43.499 Test: blockdev copy ...passed 00:07:43.499 Suite: bdevio tests on: Nvme2n3 00:07:43.499 Test: blockdev write read block ...passed 00:07:43.499 Test: blockdev write zeroes read block ...passed 00:07:43.499 Test: blockdev write zeroes read no split ...passed 00:07:43.499 Test: blockdev write zeroes read split ...passed 00:07:43.499 Test: blockdev write zeroes read split partial ...passed 00:07:43.499 Test: blockdev reset ...[2024-07-23 00:11:57.969560] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:43.499 [2024-07-23 00:11:57.971749] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:43.499 passed 00:07:43.499 Test: blockdev write read 8 blocks ...passed 00:07:43.499 Test: blockdev write read size > 128k ...passed 00:07:43.499 Test: blockdev write read invalid size ...passed 00:07:43.499 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.499 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.499 Test: blockdev write read max offset ...passed 00:07:43.499 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.499 Test: blockdev writev readv 8 blocks ...passed 00:07:43.499 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.499 Test: blockdev writev readv block ...passed 00:07:43.499 Test: blockdev writev readv size > 128k ...passed 00:07:43.499 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.499 Test: blockdev comparev and writev ...[2024-07-23 00:11:57.979959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ad20a000 len:0x1000 00:07:43.499 [2024-07-23 00:11:57.980009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:43.499 passed 00:07:43.499 Test: blockdev nvme passthru rw ...passed 00:07:43.499 Test: blockdev nvme passthru vendor specific ...[2024-07-23 00:11:57.980944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:43.499 [2024-07-23 00:11:57.980983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:43.499 passed 00:07:43.499 Test: blockdev nvme admin passthru ...passed 00:07:43.499 Test: blockdev copy ...passed 00:07:43.499 Suite: bdevio tests on: Nvme2n2 00:07:43.499 Test: blockdev write read block ...passed 00:07:43.499 Test: blockdev write zeroes read block ...passed 00:07:43.499 Test: blockdev write zeroes read no split ...passed 00:07:43.499 Test: blockdev write zeroes read split ...passed 00:07:43.499 Test: blockdev write zeroes read split partial ...passed 00:07:43.499 Test: blockdev reset ...[2024-07-23 00:11:58.008905] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:43.499 [2024-07-23 00:11:58.011035] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:43.499 passed 00:07:43.499 Test: blockdev write read 8 blocks ...passed 00:07:43.499 Test: blockdev write read size > 128k ...passed 00:07:43.499 Test: blockdev write read invalid size ...passed 00:07:43.499 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.499 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.499 Test: blockdev write read max offset ...passed 00:07:43.499 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.499 Test: blockdev writev readv 8 blocks ...passed 00:07:43.499 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.499 Test: blockdev writev readv block ...passed 00:07:43.499 Test: blockdev writev readv size > 128k ...passed 00:07:43.499 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.499 Test: blockdev comparev and writev ...[2024-07-23 00:11:58.019047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:07:43.499 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2ad206000 len:0x1000 00:07:43.499 [2024-07-23 00:11:58.019202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:43.499 passed 00:07:43.499 Test: blockdev nvme passthru vendor specific ...[2024-07-23 00:11:58.020176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:43.499 passed 00:07:43.499 Test: blockdev nvme admin passthru ...[2024-07-23 00:11:58.020218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:43.499 passed 00:07:43.499 Test: blockdev copy ...passed 00:07:43.500 Suite: bdevio tests on: Nvme2n1 00:07:43.500 Test: blockdev write read block ...passed 00:07:43.500 Test: blockdev write zeroes read block ...passed 00:07:43.500 Test: blockdev write zeroes read no split ...passed 00:07:43.500 Test: blockdev write zeroes read split ...passed 00:07:43.500 Test: blockdev write zeroes read split partial ...passed 00:07:43.500 Test: blockdev reset ...[2024-07-23 00:11:58.049004] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:43.500 [2024-07-23 00:11:58.051118] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:43.500 passed 00:07:43.500 Test: blockdev write read 8 blocks ...passed 00:07:43.500 Test: blockdev write read size > 128k ...passed 00:07:43.500 Test: blockdev write read invalid size ...passed 00:07:43.500 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.500 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.500 Test: blockdev write read max offset ...passed 00:07:43.500 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.500 Test: blockdev writev readv 8 blocks ...passed 00:07:43.500 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.500 Test: blockdev writev readv block ...passed 00:07:43.500 Test: blockdev writev readv size > 128k ...passed 00:07:43.500 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.500 Test: blockdev comparev and writev ...[2024-07-23 00:11:58.060818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ad202000 len:0x1000 00:07:43.500 [2024-07-23 00:11:58.060886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:43.500 passed 00:07:43.500 Test: blockdev nvme passthru rw ...passed 00:07:43.500 Test: blockdev nvme passthru vendor specific ...[2024-07-23 00:11:58.061889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:43.500 [2024-07-23 00:11:58.061939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:43.500 passed 00:07:43.500 Test: blockdev nvme admin passthru ...passed 00:07:43.500 Test: blockdev copy ...passed 00:07:43.500 Suite: bdevio tests on: Nvme1n1 00:07:43.500 Test: blockdev write read block ...passed 00:07:43.500 Test: blockdev write zeroes read block ...passed 00:07:43.500 Test: blockdev write zeroes read no split ...passed 00:07:43.500 Test: blockdev write zeroes read split ...passed 00:07:43.500 Test: blockdev write zeroes read split partial ...passed 00:07:43.500 Test: blockdev reset ...[2024-07-23 00:11:58.089256] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:43.500 [2024-07-23 00:11:58.091394] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:43.500 passed 00:07:43.500 Test: blockdev write read 8 blocks ...passed 00:07:43.500 Test: blockdev write read size > 128k ...passed 00:07:43.500 Test: blockdev write read invalid size ...passed 00:07:43.500 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.500 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.500 Test: blockdev write read max offset ...passed 00:07:43.500 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.500 Test: blockdev writev readv 8 blocks ...passed 00:07:43.500 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.500 Test: blockdev writev readv block ...passed 00:07:43.500 Test: blockdev writev readv size > 128k ...passed 00:07:43.500 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.500 Test: blockdev comparev and writev ...[2024-07-23 00:11:58.100416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b900e000 len:0x1000 00:07:43.500 [2024-07-23 00:11:58.100486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:43.500 passed 00:07:43.500 Test: blockdev nvme passthru rw ...passed 00:07:43.500 Test: blockdev nvme passthru vendor specific ...[2024-07-23 00:11:58.101639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:43.500 [2024-07-23 00:11:58.101697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:43.500 passed 00:07:43.500 Test: blockdev nvme admin passthru ...passed 00:07:43.500 Test: blockdev copy ...passed 00:07:43.500 Suite: bdevio tests on: Nvme0n1 00:07:43.500 Test: blockdev write read block ...passed 00:07:43.500 Test: blockdev write zeroes read block ...passed 00:07:43.500 Test: blockdev write zeroes read no split ...passed 00:07:43.500 Test: blockdev write zeroes read split ...passed 00:07:43.500 Test: blockdev write zeroes read split partial ...passed 00:07:43.500 Test: blockdev reset ...[2024-07-23 00:11:58.129733] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:43.500 [2024-07-23 00:11:58.131823] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:43.500 passed 00:07:43.500 Test: blockdev write read 8 blocks ...passed 00:07:43.500 Test: blockdev write read size > 128k ...passed 00:07:43.500 Test: blockdev write read invalid size ...passed 00:07:43.500 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.500 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.500 Test: blockdev write read max offset ...passed 00:07:43.500 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.500 Test: blockdev writev readv 8 blocks ...passed 00:07:43.500 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.500 Test: blockdev writev readv block ...passed 00:07:43.500 Test: blockdev writev readv size > 128k ...passed 00:07:43.500 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.500 Test: blockdev comparev and writev ...passed 00:07:43.500 Test: blockdev nvme passthru rw ...[2024-07-23 00:11:58.139588] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:43.500 separate metadata which is not supported yet. 00:07:43.500 passed 00:07:43.500 Test: blockdev nvme passthru vendor specific ...[2024-07-23 00:11:58.140238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:43.500 [2024-07-23 00:11:58.140292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:43.500 passed 00:07:43.500 Test: blockdev nvme admin passthru ...passed 00:07:43.500 Test: blockdev copy ...passed 00:07:43.500 00:07:43.500 Run Summary: Type Total Ran Passed Failed Inactive 00:07:43.500 suites 6 6 n/a 0 0 00:07:43.500 tests 138 138 138 0 0 00:07:43.500 asserts 893 893 893 0 n/a 00:07:43.500 00:07:43.500 Elapsed time = 0.545 seconds 00:07:43.500 0 00:07:43.500 00:11:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 77290 00:07:43.500 00:11:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 77290 ']' 00:07:43.500 00:11:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 77290 00:07:43.500 00:11:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:07:43.759 00:11:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:43.759 00:11:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77290 00:07:43.759 00:11:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:43.759 00:11:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:43.759 00:11:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77290' 00:07:43.759 killing process with pid 77290 00:07:43.759 00:11:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 77290 00:07:43.759 00:11:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 77290 00:07:43.759 00:11:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:07:43.759 00:07:43.759 real 0m1.491s 00:07:43.759 user 0m3.560s 00:07:43.759 sys 0m0.379s 00:07:43.759 00:11:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.759 00:11:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:43.759 ************************************ 00:07:43.759 END 
TEST bdev_bounds 00:07:43.759 ************************************ 00:07:44.018 00:11:58 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:44.018 00:11:58 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:44.018 00:11:58 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.018 00:11:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:44.018 ************************************ 00:07:44.018 START TEST bdev_nbd 00:07:44.018 ************************************ 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=77344 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 77344 /var/tmp/spdk-nbd.sock 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 77344 ']' 00:07:44.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
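What follows is an nbd attach/verify/detach cycle driven entirely over the app's RPC socket. The shape of one iteration, using the same rpc.py methods that appear in the trace below (device and bdev names match the log; the dd destination path here is illustrative, not the harness's own):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC nbd_start_disk Nvme0n1 /dev/nbd0    # export the bdev as a kernel block device
grep -q -w nbd0 /proc/partitions         # wait until the kernel registers it
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # one direct-I/O read as a liveness check
$RPC nbd_stop_disk /dev/nbd0             # tear the export down
$RPC nbd_get_disks                       # JSON map of active nbd<->bdev pairings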
00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:44.018 00:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:44.018 [2024-07-23 00:11:58.578392] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:44.018 [2024-07-23 00:11:58.578520] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.277 [2024-07-23 00:11:58.723286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.277 [2024-07-23 00:11:58.764951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.844 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:44.844 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:07:44.844 00:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:44.844 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.844 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.844 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:44.844 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:44.844 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.844 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.844 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:44.844 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:44.844 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:44.844 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:44.844 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:44.844 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:45.114 
00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.114 1+0 records in 00:07:45.114 1+0 records out 00:07:45.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635169 s, 6.4 MB/s 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:45.114 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.387 1+0 records in 00:07:45.387 1+0 records out 00:07:45.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000738363 s, 5.5 MB/s 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:45.387 
00:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:45.387 00:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:45.387 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:45.387 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:45.387 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:45.387 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:07:45.387 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:45.387 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:45.387 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:45.387 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:07:45.387 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:45.387 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:45.387 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:45.387 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.387 1+0 records in 00:07:45.387 1+0 records out 00:07:45.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582011 s, 7.0 MB/s 00:07:45.387 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # break 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.646 1+0 records in 00:07:45.646 1+0 records out 00:07:45.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703208 s, 5.8 MB/s 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:45.646 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.905 1+0 records in 00:07:45.905 1+0 records out 00:07:45.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533549 s, 7.7 MB/s 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 
)) 00:07:45.905 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:46.163 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:46.163 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:46.163 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:46.163 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:07:46.163 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:46.163 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:46.163 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:46.163 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:07:46.163 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:46.163 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:46.163 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:46.163 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.163 1+0 records in 00:07:46.163 1+0 records out 00:07:46.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104643 s, 3.9 MB/s 00:07:46.163 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.164 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:46.164 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.164 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:46.164 00:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:46.164 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.164 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:46.164 00:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.422 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:46.422 { 00:07:46.422 "nbd_device": "/dev/nbd0", 00:07:46.422 "bdev_name": "Nvme0n1" 00:07:46.422 }, 00:07:46.422 { 00:07:46.422 "nbd_device": "/dev/nbd1", 00:07:46.422 "bdev_name": "Nvme1n1" 00:07:46.422 }, 00:07:46.422 { 00:07:46.422 "nbd_device": "/dev/nbd2", 00:07:46.422 "bdev_name": "Nvme2n1" 00:07:46.422 }, 00:07:46.422 { 00:07:46.422 "nbd_device": "/dev/nbd3", 00:07:46.422 "bdev_name": "Nvme2n2" 00:07:46.422 }, 00:07:46.422 { 00:07:46.422 "nbd_device": "/dev/nbd4", 00:07:46.422 "bdev_name": "Nvme2n3" 00:07:46.422 }, 00:07:46.422 { 00:07:46.422 "nbd_device": "/dev/nbd5", 00:07:46.422 "bdev_name": "Nvme3n1" 00:07:46.422 } 00:07:46.422 ]' 00:07:46.422 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:46.422 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:46.422 { 00:07:46.422 "nbd_device": "/dev/nbd0", 00:07:46.422 "bdev_name": "Nvme0n1" 00:07:46.422 }, 00:07:46.422 { 
00:07:46.422 "nbd_device": "/dev/nbd1", 00:07:46.422 "bdev_name": "Nvme1n1" 00:07:46.422 }, 00:07:46.422 { 00:07:46.422 "nbd_device": "/dev/nbd2", 00:07:46.422 "bdev_name": "Nvme2n1" 00:07:46.422 }, 00:07:46.422 { 00:07:46.422 "nbd_device": "/dev/nbd3", 00:07:46.422 "bdev_name": "Nvme2n2" 00:07:46.422 }, 00:07:46.422 { 00:07:46.422 "nbd_device": "/dev/nbd4", 00:07:46.422 "bdev_name": "Nvme2n3" 00:07:46.422 }, 00:07:46.422 { 00:07:46.422 "nbd_device": "/dev/nbd5", 00:07:46.422 "bdev_name": "Nvme3n1" 00:07:46.422 } 00:07:46.422 ]' 00:07:46.422 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:46.422 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:46.422 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.422 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:46.422 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:46.422 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:46.422 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.422 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.681 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.681 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.681 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.681 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.681 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.681 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.681 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.681 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.681 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.681 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:46.940 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:46.940 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:46.940 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:46.940 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.940 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.940 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:46.940 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.940 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.940 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.940 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:47.199 
00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.199 00:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:47.458 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:47.458 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:47.458 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:47.458 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.458 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.458 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:47.458 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.458 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.458 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.458 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:47.715 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:47.715 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:47.715 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:47.715 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.715 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.715 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 
/proc/partitions 00:07:47.715 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.715 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.715 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:47.715 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.715 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 
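Each per-device check below is an instance of the same waitfornbd helper: poll /proc/partitions for the new node, then prove the device actually serves data with a single direct-I/O block read. A simplified sketch of that helper (the retry bound matches the "(( i <= 20 ))" loops in the trace; the sleep interval and the dd output path are assumptions, since the real helper in common/autotest_common.sh writes into the repo's test/bdev/nbdtest file and checks its size):

waitfornbd() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # 4 KiB direct read: fails fast if the node exists but is not backed yet
    dd "if=/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
}
waitfornbd nbd0   # and likewise for nbd1, nbd10, nbd11, nbd12, nbd13 below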
00:07:47.973 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:48.231 /dev/nbd0 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.231 1+0 records in 00:07:48.231 1+0 records out 00:07:48.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780958 s, 5.2 MB/s 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:48.231 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:48.231 /dev/nbd1 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:07:48.490 1+0 records in 00:07:48.490 1+0 records out 00:07:48.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443607 s, 9.2 MB/s 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:48.490 00:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:48.490 /dev/nbd10 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.490 1+0 records in 00:07:48.490 1+0 records out 00:07:48.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550958 s, 7.4 MB/s 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.490 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:48.491 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:48.749 /dev/nbd11 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.749 1+0 records in 00:07:48.749 1+0 records out 00:07:48.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620677 s, 6.6 MB/s 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:48.749 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:49.008 /dev/nbd12 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.008 1+0 records in 00:07:49.008 1+0 records out 00:07:49.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000753121 s, 5.4 MB/s 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:49.008 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:49.267 /dev/nbd13 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.267 1+0 records in 00:07:49.267 1+0 records out 00:07:49.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606048 s, 6.8 MB/s 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.267 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:49.526 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:49.526 { 00:07:49.526 "nbd_device": "/dev/nbd0", 00:07:49.526 "bdev_name": "Nvme0n1" 00:07:49.526 }, 00:07:49.526 { 00:07:49.526 "nbd_device": "/dev/nbd1", 00:07:49.526 "bdev_name": "Nvme1n1" 00:07:49.526 }, 00:07:49.526 { 00:07:49.526 "nbd_device": "/dev/nbd10", 00:07:49.526 "bdev_name": "Nvme2n1" 00:07:49.526 }, 00:07:49.526 { 00:07:49.526 "nbd_device": "/dev/nbd11", 00:07:49.526 
"bdev_name": "Nvme2n2" 00:07:49.526 }, 00:07:49.526 { 00:07:49.526 "nbd_device": "/dev/nbd12", 00:07:49.526 "bdev_name": "Nvme2n3" 00:07:49.526 }, 00:07:49.526 { 00:07:49.526 "nbd_device": "/dev/nbd13", 00:07:49.526 "bdev_name": "Nvme3n1" 00:07:49.526 } 00:07:49.526 ]' 00:07:49.526 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:49.526 { 00:07:49.526 "nbd_device": "/dev/nbd0", 00:07:49.526 "bdev_name": "Nvme0n1" 00:07:49.526 }, 00:07:49.526 { 00:07:49.526 "nbd_device": "/dev/nbd1", 00:07:49.526 "bdev_name": "Nvme1n1" 00:07:49.526 }, 00:07:49.526 { 00:07:49.526 "nbd_device": "/dev/nbd10", 00:07:49.526 "bdev_name": "Nvme2n1" 00:07:49.526 }, 00:07:49.526 { 00:07:49.526 "nbd_device": "/dev/nbd11", 00:07:49.526 "bdev_name": "Nvme2n2" 00:07:49.526 }, 00:07:49.526 { 00:07:49.526 "nbd_device": "/dev/nbd12", 00:07:49.526 "bdev_name": "Nvme2n3" 00:07:49.526 }, 00:07:49.526 { 00:07:49.526 "nbd_device": "/dev/nbd13", 00:07:49.526 "bdev_name": "Nvme3n1" 00:07:49.526 } 00:07:49.526 ]' 00:07:49.526 00:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:49.526 /dev/nbd1 00:07:49.526 /dev/nbd10 00:07:49.526 /dev/nbd11 00:07:49.526 /dev/nbd12 00:07:49.526 /dev/nbd13' 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:49.526 /dev/nbd1 00:07:49.526 /dev/nbd10 00:07:49.526 /dev/nbd11 00:07:49.526 /dev/nbd12 00:07:49.526 /dev/nbd13' 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:49.526 256+0 records in 00:07:49.526 256+0 records out 00:07:49.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123876 s, 84.6 MB/s 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:49.526 256+0 records in 00:07:49.526 256+0 records out 00:07:49.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115037 s, 9.1 MB/s 00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:07:49.526 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:49.785 256+0 records in 00:07:49.785 256+0 records out 00:07:49.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119892 s, 8.7 MB/s 00:07:49.785 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.785 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:49.785 256+0 records in 00:07:49.785 256+0 records out 00:07:49.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118976 s, 8.8 MB/s 00:07:49.785 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.785 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:50.044 256+0 records in 00:07:50.044 256+0 records out 00:07:50.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.113336 s, 9.3 MB/s 00:07:50.044 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.044 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:50.044 256+0 records in 00:07:50.044 256+0 records out 00:07:50.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115531 s, 9.1 MB/s 00:07:50.044 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.044 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:50.303 256+0 records in 00:07:50.303 256+0 records out 00:07:50.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116636 s, 9.0 MB/s 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.303 00:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:50.561 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:50.561 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:50.561 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:50.561 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.561 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.561 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:50.561 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.561 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.561 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.561 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.820 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:51.079 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:51.079 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:51.079 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:51.079 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.079 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.079 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:51.079 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.079 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.079 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.079 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:51.337 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:51.337 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:51.337 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:51.337 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.337 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.337 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:51.337 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.337 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.337 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.337 00:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:51.595 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:51.595 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:51.595 00:12:06 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:51.595 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.595 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.595 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:51.595 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.595 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.595 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.595 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.595 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.595 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:51.595 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.595 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:51.853 malloc_lvol_verify 00:07:51.853 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:52.112 70167a57-96f4-4454-8217-dece3bc1d3d0 00:07:52.112 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:52.371 238cd6d0-aeab-4ce5-b262-5198b53e9961 00:07:52.371 00:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:52.371 /dev/nbd0 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:52.629 mke2fs 1.46.5 (30-Dec-2021) 00:07:52.629 Discarding device blocks: 0/4096 done 00:07:52.629 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:52.629 00:07:52.629 Allocating group tables: 0/1 done 00:07:52.629 Writing inode tables: 0/1 done 00:07:52.629 Creating journal (1024 blocks): done 00:07:52.629 Writing superblocks and filesystem accounting information: 0/1 done 00:07:52.629 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 77344 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 77344 ']' 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 77344 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:52.629 00:12:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77344 00:07:52.889 killing process with pid 77344 00:07:52.889 00:12:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:52.889 00:12:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:52.889 00:12:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77344' 00:07:52.889 00:12:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 77344 00:07:52.889 00:12:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 77344 00:07:53.179 ************************************ 00:07:53.179 END TEST bdev_nbd 00:07:53.179 ************************************ 00:07:53.179 00:12:07 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:07:53.179 00:07:53.179 real 0m9.106s 00:07:53.179 user 0m12.118s 00:07:53.179 sys 0m4.019s 00:07:53.179 00:12:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.179 00:12:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:53.179 00:12:07 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:07:53.179 00:12:07 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:07:53.179 skipping fio tests on NVMe due to multi-ns failures. 00:07:53.179 00:12:07 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:53.179 00:12:07 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:53.179 00:12:07 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:53.179 00:12:07 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:07:53.179 00:12:07 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.179 00:12:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.179 ************************************ 00:07:53.179 START TEST bdev_verify 00:07:53.179 ************************************ 00:07:53.179 00:12:07 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:53.179 [2024-07-23 00:12:07.738329] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:53.179 [2024-07-23 00:12:07.738450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77707 ] 00:07:53.472 [2024-07-23 00:12:07.889575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:53.472 [2024-07-23 00:12:07.932796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.472 [2024-07-23 00:12:07.932888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.731 Running I/O for 5 seconds... 
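[Annotation] Stripped of the run_test wrapper, the verify stage that starts above is one bdevperf invocation against the JSON-attached controllers, exactly as traced:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

-q 128 keeps 128 I/Os in flight per job, -o 4096 uses 4 KiB I/O, -w verify reads back and checks each write, -t 5 bounds the run at five seconds, and -m 0x3 grants the two cores whose reactor start-up notices appear above; judging by the paired Core Mask 0x1/0x2 jobs in the results below, -C fans each bdev out to both cores.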
00:07:58.998 00:07:58.998 Latency(us) 00:07:58.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.998 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.998 Verification LBA range: start 0x0 length 0xbd0bd 00:07:58.998 Nvme0n1 : 5.05 1798.32 7.02 0.00 0.00 71021.72 12317.61 74116.22 00:07:58.998 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.998 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:58.998 Nvme0n1 : 5.06 1772.26 6.92 0.00 0.00 71611.53 17581.55 65272.80 00:07:58.998 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.998 Verification LBA range: start 0x0 length 0xa0000 00:07:58.998 Nvme1n1 : 5.06 1797.71 7.02 0.00 0.00 70912.46 12896.64 67378.38 00:07:58.998 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.998 Verification LBA range: start 0xa0000 length 0xa0000 00:07:58.998 Nvme1n1 : 5.06 1782.48 6.96 0.00 0.00 71104.28 3342.60 66957.26 00:07:58.998 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.998 Verification LBA range: start 0x0 length 0x80000 00:07:58.998 Nvme2n1 : 5.06 1797.15 7.02 0.00 0.00 70773.45 12896.64 66115.03 00:07:58.998 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.998 Verification LBA range: start 0x80000 length 0x80000 00:07:58.998 Nvme2n1 : 5.06 1782.01 6.96 0.00 0.00 71015.12 3526.84 68641.72 00:07:58.998 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.998 Verification LBA range: start 0x0 length 0x80000 00:07:58.998 Nvme2n2 : 5.06 1796.70 7.02 0.00 0.00 70652.02 12633.45 66536.15 00:07:58.998 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.998 Verification LBA range: start 0x80000 length 0x80000 00:07:58.998 Nvme2n2 : 5.05 1773.71 6.93 0.00 0.00 71989.83 17265.71 73273.99 00:07:58.998 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.998 Verification LBA range: start 0x0 length 0x80000 00:07:58.998 Nvme2n3 : 5.06 1796.25 7.02 0.00 0.00 70532.85 12738.72 64851.69 00:07:58.998 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.998 Verification LBA range: start 0x80000 length 0x80000 00:07:58.998 Nvme2n3 : 5.05 1773.26 6.93 0.00 0.00 71878.91 18529.05 66957.26 00:07:58.998 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.998 Verification LBA range: start 0x0 length 0x20000 00:07:58.998 Nvme3n1 : 5.06 1807.00 7.06 0.00 0.00 70010.37 2381.93 64009.46 00:07:58.998 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.998 Verification LBA range: start 0x20000 length 0x20000 00:07:58.998 Nvme3n1 : 5.05 1772.84 6.93 0.00 0.00 71732.79 18002.66 63588.34 00:07:58.998 =================================================================================================================== 00:07:58.998 Total : 21449.69 83.79 0.00 0.00 71099.05 2381.93 74116.22 00:07:59.257 00:07:59.257 real 0m6.274s 00:07:59.257 user 0m11.714s 00:07:59.257 sys 0m0.278s 00:07:59.257 ************************************ 00:07:59.257 END TEST bdev_verify 00:07:59.257 ************************************ 00:07:59.257 00:12:13 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.257 00:12:13 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:59.515 00:12:13 blockdev_nvme -- bdev/blockdev.sh@778 -- # 
run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:59.515 00:12:13 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:07:59.515 00:12:13 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.515 00:12:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.515 ************************************ 00:07:59.515 START TEST bdev_verify_big_io 00:07:59.515 ************************************ 00:07:59.515 00:12:14 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:59.515 [2024-07-23 00:12:14.086672] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:59.515 [2024-07-23 00:12:14.086799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77794 ] 00:07:59.774 [2024-07-23 00:12:14.238377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:59.774 [2024-07-23 00:12:14.281925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.774 [2024-07-23 00:12:14.282047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.341 Running I/O for 5 seconds... 00:08:06.904 00:08:06.904 Latency(us) 00:08:06.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.904 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.904 Verification LBA range: start 0x0 length 0xbd0b 00:08:06.904 Nvme0n1 : 5.49 165.52 10.34 0.00 0.00 737671.05 28214.70 758006.75 00:08:06.904 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.904 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:06.904 Nvme0n1 : 5.51 165.48 10.34 0.00 0.00 746448.89 22424.37 774851.34 00:08:06.904 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.904 Verification LBA range: start 0x0 length 0xa000 00:08:06.904 Nvme1n1 : 5.53 173.63 10.85 0.00 0.00 703531.37 34110.30 650201.34 00:08:06.904 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.904 Verification LBA range: start 0xa000 length 0xa000 00:08:06.904 Nvme1n1 : 5.51 166.51 10.41 0.00 0.00 726005.92 79169.59 640094.59 00:08:06.904 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.904 Verification LBA range: start 0x0 length 0x8000 00:08:06.904 Nvme2n1 : 5.61 178.06 11.13 0.00 0.00 671114.29 42322.04 660308.10 00:08:06.904 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.904 Verification LBA range: start 0x8000 length 0x8000 00:08:06.904 Nvme2n1 : 5.57 172.43 10.78 0.00 0.00 691493.08 50954.90 653570.26 00:08:06.904 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.904 Verification LBA range: start 0x0 length 0x8000 00:08:06.904 Nvme2n2 : 5.61 178.02 11.13 0.00 0.00 655167.57 41058.70 677152.69 00:08:06.904 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.904 Verification LBA range: start 0x8000 length 0x8000 00:08:06.904 Nvme2n2 : 5.62 178.38 11.15 0.00 0.00 655497.43 
35794.76 670414.86 00:08:06.904 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.904 Verification LBA range: start 0x0 length 0x8000 00:08:06.904 Nvme2n3 : 5.64 185.52 11.59 0.00 0.00 616446.09 29478.04 687259.45 00:08:06.904 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.904 Verification LBA range: start 0x8000 length 0x8000 00:08:06.904 Nvme2n3 : 5.62 182.23 11.39 0.00 0.00 628186.55 11685.94 680521.61 00:08:06.904 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.904 Verification LBA range: start 0x0 length 0x2000 00:08:06.904 Nvme3n1 : 5.67 203.01 12.69 0.00 0.00 553014.62 565.87 704104.04 00:08:06.904 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.904 Verification LBA range: start 0x2000 length 0x2000 00:08:06.904 Nvme3n1 : 5.68 200.24 12.52 0.00 0.00 559262.15 829.07 1078054.04 00:08:06.904 =================================================================================================================== 00:08:06.904 Total : 2149.03 134.31 0.00 0.00 657326.01 565.87 1078054.04 00:08:06.904 00:08:06.904 real 0m7.157s 00:08:06.904 user 0m13.455s 00:08:06.904 sys 0m0.289s 00:08:06.904 00:12:21 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:06.904 00:12:21 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:06.904 ************************************ 00:08:06.904 END TEST bdev_verify_big_io 00:08:06.904 ************************************ 00:08:06.904 00:12:21 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:06.904 00:12:21 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:06.904 00:12:21 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:06.904 00:12:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.904 ************************************ 00:08:06.904 START TEST bdev_write_zeroes 00:08:06.904 ************************************ 00:08:06.904 00:12:21 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:06.904 [2024-07-23 00:12:21.323743] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:06.904 [2024-07-23 00:12:21.323926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77892 ] 00:08:06.904 [2024-07-23 00:12:21.478785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.904 [2024-07-23 00:12:21.520458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.471 Running I/O for 1 seconds... 
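[Annotation] The three timed bdevperf passes in this suite share bdev.json and differ only in workload flags, which makes their results tables directly comparable:

    -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # bdev_verify: 4 KiB verified I/O, two cores
    -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # bdev_verify_big_io: the same at 64 KiB per I/O
    -q 128 -o 4096  -w write_zeroes -t 1             # bdev_write_zeroes: one core, one second

The 64 KiB pass trades IOPS for bandwidth, which is why its table reports a few hundred IOPS at roughly 10 MiB/s per job where the 4 KiB verify pass reported close to 1800 IOPS per job.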
00:08:08.406 00:08:08.406 Latency(us) 00:08:08.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.406 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.406 Nvme0n1 : 1.01 12565.59 49.08 0.00 0.00 10161.22 5079.70 60219.42 00:08:08.406 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.406 Nvme1n1 : 1.01 12601.45 49.22 0.00 0.00 10119.65 7632.71 51797.13 00:08:08.406 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.406 Nvme2n1 : 1.01 12588.87 49.18 0.00 0.00 10106.61 7790.62 51376.01 00:08:08.406 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.406 Nvme2n2 : 1.01 12618.15 49.29 0.00 0.00 10057.91 6500.96 46112.08 00:08:08.406 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.406 Nvme2n3 : 1.02 12639.55 49.37 0.00 0.00 10026.94 4369.07 50533.78 00:08:08.406 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.406 Nvme3n1 : 1.02 12669.70 49.49 0.00 0.00 9950.95 4237.47 50954.90 00:08:08.406 =================================================================================================================== 00:08:08.406 Total : 75683.32 295.64 0.00 0.00 10070.11 4237.47 60219.42 00:08:08.674 00:08:08.674 real 0m1.934s 00:08:08.674 user 0m1.597s 00:08:08.674 sys 0m0.225s 00:08:08.674 00:12:23 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:08.674 ************************************ 00:08:08.674 END TEST bdev_write_zeroes 00:08:08.674 00:12:23 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:08.674 ************************************ 00:08:08.674 00:12:23 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.674 00:12:23 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:08.674 00:12:23 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.674 00:12:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:08.674 ************************************ 00:08:08.674 START TEST bdev_json_nonenclosed 00:08:08.674 ************************************ 00:08:08.675 00:12:23 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.675 [2024-07-23 00:12:23.315170] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:08.675 [2024-07-23 00:12:23.315322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77933 ] 00:08:08.934 [2024-07-23 00:12:23.465334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.934 [2024-07-23 00:12:23.507501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.934 [2024-07-23 00:12:23.507602] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
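[Annotation] In the write_zeroes table above, the MiB/s column is just the IOPS column scaled by the 4 KiB I/O size, so the rows can be sanity-checked by hand: at 4096-byte I/O, MiB/s = IOPS * 4096 / 1048576 = IOPS / 256.

    awk 'BEGIN { printf "%.2f\n", 12565.59 / 256 }'   # prints 49.08, matching the Nvme0n1 row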
00:08:08.934 [2024-07-23 00:12:23.507639] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:08.934 [2024-07-23 00:12:23.507658] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.934 00:08:08.934 real 0m0.379s 00:08:08.934 user 0m0.147s 00:08:08.934 sys 0m0.128s 00:08:08.934 00:12:23 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:08.934 ************************************ 00:08:08.934 END TEST bdev_json_nonenclosed 00:08:08.934 ************************************ 00:08:08.934 00:12:23 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:09.193 00:12:23 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:09.193 00:12:23 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:09.193 00:12:23 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.193 00:12:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:09.193 ************************************ 00:08:09.193 START TEST bdev_json_nonarray 00:08:09.193 ************************************ 00:08:09.193 00:12:23 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:09.193 [2024-07-23 00:12:23.767663] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:09.193 [2024-07-23 00:12:23.767795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77954 ] 00:08:09.452 [2024-07-23 00:12:23.916872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.452 [2024-07-23 00:12:23.958559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.452 [2024-07-23 00:12:23.958664] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
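[Annotation] Both JSON negative tests above (bdev_json_nonenclosed and bdev_json_nonarray) feed bdevperf a deliberately malformed --json config and pass only when initialization aborts with the expected message and spdk_app_stop reports non-zero. The repo files themselves are not printed in the trace, so the payload shapes below are illustrative guesses matched to the two error strings, not the actual nonenclosed.json/nonarray.json contents:

    # Top level is an array, not an object: would trigger 'not enclosed in {}'.
    echo '[ { "subsystem": "bdev", "config": [] } ]' > /tmp/nonenclosed.json
    # 'subsystems' is an object where an array is required: would trigger the nonarray error.
    echo '{ "subsystems": { "subsystem": "bdev" } }' > /tmp/nonarray.json
    # The suite passes when a run like this fails cleanly instead of starting I/O:
    if build/examples/bdevperf --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1; then
        echo 'unexpected success'
    fi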
00:08:09.452 [2024-07-23 00:12:23.958688] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:09.452 [2024-07-23 00:12:23.958700] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.452 00:08:09.452 real 0m0.379s 00:08:09.452 user 0m0.155s 00:08:09.452 sys 0m0.120s 00:08:09.452 00:12:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.452 ************************************ 00:08:09.452 END TEST bdev_json_nonarray 00:08:09.452 ************************************ 00:08:09.452 00:12:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:09.452 00:12:24 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:08:09.452 00:12:24 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:08:09.452 00:12:24 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:08:09.452 00:12:24 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:08:09.452 00:12:24 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:08:09.452 00:12:24 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:09.453 00:12:24 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:09.453 00:12:24 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:09.453 00:12:24 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:09.453 00:12:24 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:09.453 00:12:24 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:09.453 00:08:09.453 real 0m30.247s 00:08:09.453 user 0m45.562s 00:08:09.453 sys 0m6.684s 00:08:09.453 00:12:24 blockdev_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.453 ************************************ 00:08:09.453 END TEST blockdev_nvme 00:08:09.453 ************************************ 00:08:09.453 00:12:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:09.712 00:12:24 -- spdk/autotest.sh@213 -- # uname -s 00:08:09.712 00:12:24 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:08:09.712 00:12:24 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:09.712 00:12:24 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:09.712 00:12:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.712 00:12:24 -- common/autotest_common.sh@10 -- # set +x 00:08:09.712 ************************************ 00:08:09.712 START TEST blockdev_nvme_gpt 00:08:09.712 ************************************ 00:08:09.712 00:12:24 blockdev_nvme_gpt -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:09.712 * Looking for test storage... 
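[Annotation] Every suite in this log is launched through the same run_test wrapper from autotest_common.sh, which is where the recurring banner pairs and real/user/sys triplets come from. Reduced to its observable behavior (a paraphrase, not the verbatim helper, which also manages xtrace state):

    run_test() {
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"                    # the timed body yields the real/user/sys lines
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return $rc
    }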
00:08:09.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=78030 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 78030 00:08:09.712 00:12:24 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:09.712 00:12:24 blockdev_nvme_gpt -- common/autotest_common.sh@827 -- # '[' -z 78030 ']' 00:08:09.712 00:12:24 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.712 00:12:24 blockdev_nvme_gpt -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:09.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.713 00:12:24 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
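[Annotation] The gpt flavor needs a live spdk_tgt it can attach NVMe controllers to and repartition, so blockdev.sh launches one (pid 78030 here) and blocks until the RPC socket answers before issuing any bdev RPCs. The essential sequence, with waitforlisten reduced to a naive poll (the real helper also keeps checking that the pid is alive and applies a timeout; the trap line is verbatim from the trace):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5   # assumed retry interval
    done

Once the socket is up, the trace below loads gen_nvme.sh's generated config via load_subsystem_config, attaching Nvme0 through Nvme3 by PCIe address.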
00:08:09.713 00:12:24 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:09.713 00:12:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:09.972 [2024-07-23 00:12:24.448249] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:09.972 [2024-07-23 00:12:24.448402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78030 ] 00:08:09.972 [2024-07-23 00:12:24.600792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.972 [2024-07-23 00:12:24.641310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.909 00:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:10.909 00:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # return 0 00:08:10.909 00:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:10.909 00:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:08:10.909 00:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:11.169 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:11.428 Waiting for block devices as requested 00:08:11.428 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.687 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.687 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.687 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:16.959 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:16.959 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1666 -- # local nvme bdf 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:08:16.959 
00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:16.959 00:12:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:16.959 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:08:16.959 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:08:16.959 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:08:16.960 BYT; 00:08:16.960 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:16.960 00:12:31 blockdev_nvme_gpt 
-- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:08:16.960 BYT; 00:08:16.960 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:16.960 00:12:31 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:16.960 00:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 
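The get_spdk_gpt_old/get_spdk_gpt helpers traced above recover SPDK's GPT partition-type GUIDs straight from the C header rather than hard-coding them. A sketch of the technique, reconstructed from the two spdk_guid assignments visible in the xtrace; the exact substitutions in scripts/common.sh may differ:

    # Pull a GPT partition-type GUID out of module/bdev/gpt/gpt.h.
    get_spdk_gpt_guid() {
        local gpt_h=$1 macro=$2 spdk_guid
        # The macro's argument list sits between '(' and ')'; splitting on
        # those characters drops it into $spdk_guid in a single read.
        IFS='()' read -r _ spdk_guid _ < <(grep -w "$macro" "$gpt_h")
        spdk_guid=${spdk_guid//, /-}  # "0x6527994e, 0x2c5a, ..." -> "0x6527994e-0x2c5a-..."
        spdk_guid=${spdk_guid//0x/}   # -> "6527994e-2c5a-4eec-9613-8f5944074e8b"
        echo "$spdk_guid"
    }

    gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    SPDK_GPT_GUID=$(get_spdk_gpt_guid "$gpt_h" SPDK_GPT_PART_TYPE_GUID)
    SPDK_GPT_OLD_GUID=$(get_spdk_gpt_guid "$gpt_h" SPDK_GPT_PART_TYPE_GUID_OLD)

Reading the GUIDs out of the header keeps the test in sync with the code under test: if the constants in gpt.h ever change, the sgdisk relabeling below follows automatically.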
1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:08:17.922 The operation has completed successfully. 00:08:17.922 00:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:08:19.313 The operation has completed successfully. 00:08:19.313 00:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:19.572 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:20.509 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:20.509 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:20.509 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:20.509 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:20.509 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:08:20.509 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.509 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.509 [] 00:08:20.509 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.509 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:08:20.509 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:20.509 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:20.509 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:20.768 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:20.768 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.768 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:21.027 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.027 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:08:21.027 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.027 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:21.027 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.027 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:08:21.027 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:08:21.027 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.027 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:21.027 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.027 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:08:21.027 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.027 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 
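setup_nvme_conf above feeds the gen_nvme.sh output into load_subsystem_config over the app's RPC socket. A standalone sketch of the same attach, calling rpc.py directly instead of going through the autotest rpc_cmd wrapper (which the trace uses); the four PCIe addresses are exactly the ones probed in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" load_subsystem_config -j '{
      "subsystem": "bdev",
      "config": [
        {"method": "bdev_nvme_attach_controller",
         "params": {"trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0"}},
        {"method": "bdev_nvme_attach_controller",
         "params": {"trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0"}},
        {"method": "bdev_nvme_attach_controller",
         "params": {"trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0"}},
        {"method": "bdev_nvme_attach_controller",
         "params": {"trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0"}}
      ]
    }'

Each attach creates one NvmeXnY bdev per active namespace; after bdev_wait_for_examine, the bdev_get_bdevs dump below also shows the GPT virtual bdevs Nvme0n1p1/Nvme0n1p2 layered on top of Nvme0n1, discovered via the partition-type GUIDs that sgdisk just wrote.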
00:08:21.027 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.028 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:21.028 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.028 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:21.028 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.028 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:08:21.028 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:08:21.028 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.028 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:21.028 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:08:21.028 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.288 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:08:21.289 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "9f4c1125-ffc3-4a4b-a9f1-f026f4787aaa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9f4c1125-ffc3-4a4b-a9f1-f026f4787aaa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "aec672d4-e3ac-4099-9be0-112f8b0b4bf2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aec672d4-e3ac-4099-9be0-112f8b0b4bf2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "366d3edc-c805-4f45-bcd3-b4f605a1961a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "366d3edc-c805-4f45-bcd3-b4f605a1961a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": 
"Nvme2n3",' ' "aliases": [' ' "f1174048-3ffe-46d2-bad5-7894186ec175"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f1174048-3ffe-46d2-bad5-7894186ec175",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "a3357976-abe1-4472-a1ca-64b00e516f75"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a3357976-abe1-4472-a1ca-64b00e516f75",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:21.289 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:08:21.289 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:08:21.289 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:08:21.289 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:08:21.289 00:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 78030 00:08:21.289 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@946 -- # '[' -z 78030 ']' 00:08:21.289 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # kill -0 78030 00:08:21.289 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # uname 00:08:21.289 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:21.289 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78030 00:08:21.289 killing 
process with pid 78030 00:08:21.289 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:21.289 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:21.289 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78030' 00:08:21.289 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@965 -- # kill 78030 00:08:21.289 00:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # wait 78030 00:08:21.548 00:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:21.548 00:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:21.548 00:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:21.548 00:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.548 00:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:21.548 ************************************ 00:08:21.548 START TEST bdev_hello_world 00:08:21.548 ************************************ 00:08:21.548 00:12:36 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:21.807 [2024-07-23 00:12:36.261945] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:21.807 [2024-07-23 00:12:36.262067] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78644 ] 00:08:21.807 [2024-07-23 00:12:36.409662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.807 [2024-07-23 00:12:36.451341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.376 [2024-07-23 00:12:36.826013] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:22.376 [2024-07-23 00:12:36.826065] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:22.376 [2024-07-23 00:12:36.826086] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:22.376 [2024-07-23 00:12:36.828316] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:22.376 [2024-07-23 00:12:36.828874] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:22.376 [2024-07-23 00:12:36.828910] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:22.376 [2024-07-23 00:12:36.829295] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
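The killprocess 78030 sequence traced above is the harness's standard teardown. A sketch of it, assuming the target is a child of the calling shell so that wait can reap it; the real helper also special-cases sudo-wrapped processes, which this version simply refuses to touch:

    killprocess() {
        local pid=$1 process_name=
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0    # already gone: nothing to do
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && return 1    # never SIGTERM the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap and propagate the exit status (children only)
    }

In the trace the comm value comes back as reactor_0: SPDK renames its reactor threads, so the name check sees the polling thread's name rather than the original binary name.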
00:08:22.376 00:08:22.376 [2024-07-23 00:12:36.829336] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:22.635 00:08:22.635 real 0m0.875s 00:08:22.635 user 0m0.563s 00:08:22.635 sys 0m0.208s 00:08:22.635 00:12:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.635 ************************************ 00:08:22.635 END TEST bdev_hello_world 00:08:22.635 ************************************ 00:08:22.635 00:12:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:22.635 00:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:08:22.635 00:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:22.635 00:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.635 00:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:22.635 ************************************ 00:08:22.635 START TEST bdev_bounds 00:08:22.635 ************************************ 00:08:22.635 00:12:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:08:22.635 Process bdevio pid: 78675 00:08:22.635 00:12:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=78675 00:08:22.635 00:12:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:22.635 00:12:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:22.635 00:12:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 78675' 00:08:22.635 00:12:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 78675 00:08:22.635 00:12:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 78675 ']' 00:08:22.635 00:12:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.635 00:12:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:22.635 00:12:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.635 00:12:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:22.635 00:12:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:22.635 [2024-07-23 00:12:37.209702] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
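waitforlisten above blocks until pid 78675 is actually serving RPCs on /var/tmp/spdk.sock. A minimal sketch of such a readiness poll; the rpc_get_methods probe and the retry cadence are assumptions, though max_retries=100 matches the traced default:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # app died during startup
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                   rpc_get_methods &> /dev/null; then
                return 0    # socket exists and the app answers
            fi
            sleep 0.5
        done
        return 1
    }

Polling an actual RPC instead of merely testing -S on the socket path avoids the window where the socket file exists but the reactor is not yet dispatching requests.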
00:08:22.635 [2024-07-23 00:12:37.209824] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78675 ] 00:08:22.893 [2024-07-23 00:12:37.361811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:22.893 [2024-07-23 00:12:37.405483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.893 [2024-07-23 00:12:37.405568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.893 [2024-07-23 00:12:37.405670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.459 00:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:23.459 00:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:08:23.459 00:12:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:23.459 I/O targets: 00:08:23.459 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:23.459 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:23.459 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:23.459 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:23.459 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:23.459 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:23.459 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:23.459 00:08:23.459 00:08:23.459 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.459 http://cunit.sourceforge.net/ 00:08:23.459 00:08:23.459 00:08:23.459 Suite: bdevio tests on: Nvme3n1 00:08:23.459 Test: blockdev write read block ...passed 00:08:23.459 Test: blockdev write zeroes read block ...passed 00:08:23.459 Test: blockdev write zeroes read no split ...passed 00:08:23.459 Test: blockdev write zeroes read split ...passed 00:08:23.459 Test: blockdev write zeroes read split partial ...passed 00:08:23.459 Test: blockdev reset ...[2024-07-23 00:12:38.086627] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:23.459 passed 00:08:23.459 Test: blockdev write read 8 blocks ...[2024-07-23 00:12:38.088529] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
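The bdevio run above is split in two: the app is launched with -w so it initializes and then parks, and tests.py fires a perform_tests RPC to start the CUnit suites. A sketch of that choreography, with a sleep standing in for the waitforlisten call the harness really uses (flags copied from the traced command line):

    bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
    tests_py=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    "$bdevio" -w -s 0 --json "$conf" &    # -w: wait for an RPC before running tests
    bdevio_pid=$!
    sleep 1                               # stand-in for: waitforlisten "$bdevio_pid"
    "$tests_py" perform_tests             # drives every suite, prints the run summary
    kill "$bdevio_pid" && wait "$bdevio_pid"

Note that the COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) completions scattered through the suites are provoked on purpose to check error propagation, which is why they surface as NOTICE output inside tests that still end in "passed".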
00:08:23.459 passed 00:08:23.459 Test: blockdev write read size > 128k ...passed 00:08:23.459 Test: blockdev write read invalid size ...passed 00:08:23.459 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.459 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.459 Test: blockdev write read max offset ...passed 00:08:23.459 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.459 Test: blockdev writev readv 8 blocks ...passed 00:08:23.459 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.459 Test: blockdev writev readv block ...passed 00:08:23.459 Test: blockdev writev readv size > 128k ...passed 00:08:23.459 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.459 Test: blockdev comparev and writev ...[2024-07-23 00:12:38.095711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bce04000 len:0x1000 00:08:23.459 [2024-07-23 00:12:38.095767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.459 passed 00:08:23.459 Test: blockdev nvme passthru rw ...passed 00:08:23.459 Test: blockdev nvme passthru vendor specific ...passed 00:08:23.459 Test: blockdev nvme admin passthru ...[2024-07-23 00:12:38.096697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:23.459 [2024-07-23 00:12:38.096736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:23.459 passed 00:08:23.459 Test: blockdev copy ...passed 00:08:23.459 Suite: bdevio tests on: Nvme2n3 00:08:23.459 Test: blockdev write read block ...passed 00:08:23.459 Test: blockdev write zeroes read block ...passed 00:08:23.459 Test: blockdev write zeroes read no split ...passed 00:08:23.459 Test: blockdev write zeroes read split ...passed 00:08:23.459 Test: blockdev write zeroes read split partial ...passed 00:08:23.459 Test: blockdev reset ...[2024-07-23 00:12:38.121365] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:23.459 passed 00:08:23.459 Test: blockdev write read 8 blocks ...[2024-07-23 00:12:38.123375] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:23.459 passed 00:08:23.459 Test: blockdev write read size > 128k ...passed 00:08:23.459 Test: blockdev write read invalid size ...passed 00:08:23.459 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.459 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.459 Test: blockdev write read max offset ...passed 00:08:23.459 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.459 Test: blockdev writev readv 8 blocks ...passed 00:08:23.459 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.459 Test: blockdev writev readv block ...passed 00:08:23.459 Test: blockdev writev readv size > 128k ...passed 00:08:23.459 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.459 Test: blockdev comparev and writev ...[2024-07-23 00:12:38.134503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bce04000 len:0x1000 00:08:23.459 [2024-07-23 00:12:38.134547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.459 passed 00:08:23.459 Test: blockdev nvme passthru rw ...passed 00:08:23.459 Test: blockdev nvme passthru vendor specific ...[2024-07-23 00:12:38.137703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:23.459 [2024-07-23 00:12:38.137866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:23.459 passed 00:08:23.718 Test: blockdev nvme admin passthru ...passed 00:08:23.718 Test: blockdev copy ...passed 00:08:23.718 Suite: bdevio tests on: Nvme2n2 00:08:23.718 Test: blockdev write read block ...passed 00:08:23.718 Test: blockdev write zeroes read block ...passed 00:08:23.718 Test: blockdev write zeroes read no split ...passed 00:08:23.718 Test: blockdev write zeroes read split ...passed 00:08:23.718 Test: blockdev write zeroes read split partial ...passed 00:08:23.718 Test: blockdev reset ...[2024-07-23 00:12:38.164371] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:23.718 [2024-07-23 00:12:38.166545] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:23.718 passed 00:08:23.718 Test: blockdev write read 8 blocks ...passed 00:08:23.718 Test: blockdev write read size > 128k ...passed 00:08:23.718 Test: blockdev write read invalid size ...passed 00:08:23.718 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.718 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.718 Test: blockdev write read max offset ...passed 00:08:23.718 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.718 Test: blockdev writev readv 8 blocks ...passed 00:08:23.718 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.718 Test: blockdev writev readv block ...passed 00:08:23.718 Test: blockdev writev readv size > 128k ...passed 00:08:23.718 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.718 Test: blockdev comparev and writev ...[2024-07-23 00:12:38.174248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfc22000 len:0x1000 00:08:23.718 [2024-07-23 00:12:38.174428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.718 passed 00:08:23.718 Test: blockdev nvme passthru rw ...passed 00:08:23.719 Test: blockdev nvme passthru vendor specific ...[2024-07-23 00:12:38.175614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:23.719 passed[2024-07-23 00:12:38.175756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:23.719 00:08:23.719 Test: blockdev nvme admin passthru ...passed 00:08:23.719 Test: blockdev copy ...passed 00:08:23.719 Suite: bdevio tests on: Nvme2n1 00:08:23.719 Test: blockdev write read block ...passed 00:08:23.719 Test: blockdev write zeroes read block ...passed 00:08:23.719 Test: blockdev write zeroes read no split ...passed 00:08:23.719 Test: blockdev write zeroes read split ...passed 00:08:23.719 Test: blockdev write zeroes read split partial ...passed 00:08:23.719 Test: blockdev reset ...[2024-07-23 00:12:38.200746] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:23.719 [2024-07-23 00:12:38.202838] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:23.719 passed 00:08:23.719 Test: blockdev write read 8 blocks ...passed 00:08:23.719 Test: blockdev write read size > 128k ...passed 00:08:23.719 Test: blockdev write read invalid size ...passed 00:08:23.719 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.719 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.719 Test: blockdev write read max offset ...passed 00:08:23.719 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.719 Test: blockdev writev readv 8 blocks ...passed 00:08:23.719 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.719 Test: blockdev writev readv block ...passed 00:08:23.719 Test: blockdev writev readv size > 128k ...passed 00:08:23.719 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.719 Test: blockdev comparev and writev ...[2024-07-23 00:12:38.210175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bce0d000 len:0x1000 00:08:23.719 [2024-07-23 00:12:38.210222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.719 passed 00:08:23.719 Test: blockdev nvme passthru rw ...passed 00:08:23.719 Test: blockdev nvme passthru vendor specific ...passed 00:08:23.719 Test: blockdev nvme admin passthru ...[2024-07-23 00:12:38.210989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:23.719 [2024-07-23 00:12:38.211025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:23.719 passed 00:08:23.719 Test: blockdev copy ...passed 00:08:23.719 Suite: bdevio tests on: Nvme1n1 00:08:23.719 Test: blockdev write read block ...passed 00:08:23.719 Test: blockdev write zeroes read block ...passed 00:08:23.719 Test: blockdev write zeroes read no split ...passed 00:08:23.719 Test: blockdev write zeroes read split ...passed 00:08:23.719 Test: blockdev write zeroes read split partial ...passed 00:08:23.719 Test: blockdev reset ...[2024-07-23 00:12:38.236406] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:23.719 passed 00:08:23.719 Test: blockdev write read 8 blocks ...[2024-07-23 00:12:38.238223] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:23.719 passed 00:08:23.719 Test: blockdev write read size > 128k ...passed 00:08:23.719 Test: blockdev write read invalid size ...passed 00:08:23.719 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.719 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.719 Test: blockdev write read max offset ...passed 00:08:23.719 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.719 Test: blockdev writev readv 8 blocks ...passed 00:08:23.719 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.719 Test: blockdev writev readv block ...passed 00:08:23.719 Test: blockdev writev readv size > 128k ...passed 00:08:23.719 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.719 Test: blockdev comparev and writev ...[2024-07-23 00:12:38.245515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bca32000 len:0x1000 00:08:23.719 [2024-07-23 00:12:38.245560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.719 passed 00:08:23.719 Test: blockdev nvme passthru rw ...passed 00:08:23.719 Test: blockdev nvme passthru vendor specific ...[2024-07-23 00:12:38.246395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:23.719 [2024-07-23 00:12:38.246429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:23.719 passed 00:08:23.719 Test: blockdev nvme admin passthru ...passed 00:08:23.719 Test: blockdev copy ...passed 00:08:23.719 Suite: bdevio tests on: Nvme0n1p2 00:08:23.719 Test: blockdev write read block ...passed 00:08:23.719 Test: blockdev write zeroes read block ...passed 00:08:23.719 Test: blockdev write zeroes read no split ...passed 00:08:23.719 Test: blockdev write zeroes read split ...passed 00:08:23.719 Test: blockdev write zeroes read split partial ...passed 00:08:23.719 Test: blockdev reset ...[2024-07-23 00:12:38.278349] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:23.719 [2024-07-23 00:12:38.284363] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:23.719 passed 00:08:23.719 Test: blockdev write read 8 blocks ...passed 00:08:23.719 Test: blockdev write read size > 128k ...passed 00:08:23.719 Test: blockdev write read invalid size ...passed 00:08:23.719 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.719 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.719 Test: blockdev write read max offset ...passed 00:08:23.719 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.719 Test: blockdev writev readv 8 blocks ...passed 00:08:23.719 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.719 Test: blockdev writev readv block ...passed 00:08:23.719 Test: blockdev writev readv size > 128k ...passed 00:08:23.719 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.719 Test: blockdev comparev and writev ...passed 00:08:23.719 Test: blockdev nvme passthru rw ...passed 00:08:23.719 Test: blockdev nvme passthru vendor specific ...passed 00:08:23.719 Test: blockdev nvme admin passthru ...passed 00:08:23.719 Test: blockdev copy ...[2024-07-23 00:12:38.294588] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:23.719 separate metadata which is not supported yet. 00:08:23.719 passed 00:08:23.719 Suite: bdevio tests on: Nvme0n1p1 00:08:23.719 Test: blockdev write read block ...passed 00:08:23.719 Test: blockdev write zeroes read block ...passed 00:08:23.719 Test: blockdev write zeroes read no split ...passed 00:08:23.719 Test: blockdev write zeroes read split ...passed 00:08:23.719 Test: blockdev write zeroes read split partial ...passed 00:08:23.719 Test: blockdev reset ...[2024-07-23 00:12:38.315043] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:23.719 [2024-07-23 00:12:38.317458] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:23.719 passed 00:08:23.719 Test: blockdev write read 8 blocks ...passed 00:08:23.719 Test: blockdev write read size > 128k ...passed 00:08:23.719 Test: blockdev write read invalid size ...passed 00:08:23.719 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.719 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.719 Test: blockdev write read max offset ...passed 00:08:23.719 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.719 Test: blockdev writev readv 8 blocks ...passed 00:08:23.719 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.719 Test: blockdev writev readv block ...passed 00:08:23.719 Test: blockdev writev readv size > 128k ...passed 00:08:23.719 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.719 Test: blockdev comparev and writev ...passed 00:08:23.719 Test: blockdev nvme passthru rw ...passed 00:08:23.719 Test: blockdev nvme passthru vendor specific ...passed 00:08:23.719 Test: blockdev nvme admin passthru ...passed 00:08:23.719 Test: blockdev copy ...[2024-07-23 00:12:38.325300] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:23.719 separate metadata which is not supported yet. 
00:08:23.719 passed 00:08:23.719 00:08:23.719 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.719 suites 7 7 n/a 0 0 00:08:23.719 tests 161 161 161 0 0 00:08:23.719 asserts 1006 1006 1006 0 n/a 00:08:23.719 00:08:23.719 Elapsed time = 0.564 seconds 00:08:23.719 0 00:08:23.719 00:12:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 78675 00:08:23.719 00:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 78675 ']' 00:08:23.719 00:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 78675 00:08:23.719 00:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:08:23.719 00:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:23.719 00:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78675 00:08:23.719 00:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:23.719 00:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:23.719 00:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78675' 00:08:23.719 killing process with pid 78675 00:08:23.719 00:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@965 -- # kill 78675 00:08:23.719 00:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # wait 78675 00:08:23.978 00:12:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:08:23.978 00:08:23.978 real 0m1.468s 00:08:23.978 user 0m3.464s 00:08:23.978 sys 0m0.344s 00:08:23.978 00:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:23.978 00:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:23.978 ************************************ 00:08:23.978 END TEST bdev_bounds 00:08:23.978 ************************************ 00:08:23.978 00:12:38 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:23.978 00:12:38 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:23.978 00:12:38 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:23.978 00:12:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:24.237 ************************************ 00:08:24.237 START TEST bdev_nbd 00:08:24.237 ************************************ 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd 
-- bdev/blockdev.sh@304 -- # local bdev_all 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=78724 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 78724 /var/tmp/spdk-nbd.sock 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 78724 ']' 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:24.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:24.237 00:12:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:24.237 [2024-07-23 00:12:38.764601] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:24.237 [2024-07-23 00:12:38.764732] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.237 [2024-07-23 00:12:38.916224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.496 [2024-07-23 00:12:38.959152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.064 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:25.064 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:08:25.064 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:25.064 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.064 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:25.064 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:25.064 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:25.064 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.064 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:25.064 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:25.064 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:25.064 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:25.064 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:25.064 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:25.064 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 
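nbd_start_disk, just issued for Nvme0n1p1, returns the kernel node it allocated; the trace captures it into nbd_device=/dev/nbd0. A sketch of one export/teardown round trip on the same socket (nbd_stop_disk is the matching teardown RPC; calling it inline here is illustrative, as the test's cleanup path handles it in practice):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    nbd_device=$("$rpc" -s "$sock" nbd_start_disk Nvme0n1p1)   # e.g. /dev/nbd0
    echo "Nvme0n1p1 exported at $nbd_device"

    # ...ordinary block I/O against $nbd_device goes here...

    "$rpc" -s "$sock" nbd_stop_disk "$nbd_device"

Omitting the second argument to nbd_start_disk lets SPDK pick the first free /dev/nbdX itself, which is why the trace derives the device name from the RPC's output instead of assuming nbd0.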
-- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.323 1+0 records in 00:08:25.323 1+0 records out 00:08:25.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493946 s, 8.3 MB/s 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.323 1+0 records in 00:08:25.323 1+0 records out 00:08:25.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000886868 s, 4.6 MB/s 00:08:25.323 00:12:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:25.582 00:12:40 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.582 1+0 records in 00:08:25.582 1+0 records out 00:08:25.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606876 s, 6.7 MB/s 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:25.582 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:25.840 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:25.840 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:25.840 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:25.840 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.841 1+0 records in 00:08:25.841 1+0 records out 00:08:25.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000843274 s, 4.9 MB/s 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:25.841 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:26.099 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:26.099 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.100 1+0 records in 00:08:26.100 1+0 records out 00:08:26.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053651 s, 7.6 MB/s 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:26.100 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd 
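Each start/wait pair above comes from the loop traced at bdev/nbd_common.sh@27-30: the bdev is exported without naming an nbd index, the RPC prints whichever /dev/nbdX it allocated, and the helper then waits on that node. A sketch of that loop under stated assumptions: the function name, the $rootdir shorthand for the repository path, and the argument layout are not visible in this excerpt (the trace only shows the loop body, with the bound already expanded to 7). A second flavor at @14-17, used later in this log, passes an explicit /dev/nbdX instead of letting the RPC choose.

nbd_start_disks_without_nbd_idx() {    # name assumed; only @27-30 is traced
    local rpc_server=$1 i nbd_device
    local bdev_list=($2)
    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        # @28: with no /dev/nbdX argument the RPC picks and prints the next free one
        nbd_device=$("$rootdir/scripts/rpc.py" -s "$rpc_server" \
            nbd_start_disk "${bdev_list[i]}")
        waitfornbd "$(basename "$nbd_device")"    # @30
    done
}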
-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.359 1+0 records in 00:08:26.359 1+0 records out 00:08:26.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000616375 s, 6.6 MB/s 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:26.359 00:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd6 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd6 /proc/partitions 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd6 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.618 1+0 records in 00:08:26.618 1+0 records out 00:08:26.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000886295 s, 4.6 MB/s 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:26.618 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.877 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:26.877 { 00:08:26.877 "nbd_device": "/dev/nbd0", 00:08:26.877 "bdev_name": "Nvme0n1p1" 00:08:26.877 }, 00:08:26.877 { 00:08:26.877 "nbd_device": "/dev/nbd1", 00:08:26.877 "bdev_name": "Nvme0n1p2" 00:08:26.877 }, 00:08:26.877 { 00:08:26.877 "nbd_device": "/dev/nbd2", 00:08:26.877 "bdev_name": "Nvme1n1" 00:08:26.877 }, 00:08:26.877 { 00:08:26.877 "nbd_device": "/dev/nbd3", 00:08:26.877 "bdev_name": "Nvme2n1" 00:08:26.877 }, 00:08:26.877 { 00:08:26.877 "nbd_device": "/dev/nbd4", 00:08:26.877 "bdev_name": "Nvme2n2" 00:08:26.877 }, 00:08:26.877 { 00:08:26.877 "nbd_device": "/dev/nbd5", 00:08:26.877 "bdev_name": "Nvme2n3" 00:08:26.877 }, 00:08:26.877 { 00:08:26.877 "nbd_device": "/dev/nbd6", 00:08:26.877 "bdev_name": "Nvme3n1" 00:08:26.877 } 00:08:26.877 ]' 00:08:26.877 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:26.877 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:26.877 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:26.877 { 00:08:26.877 "nbd_device": "/dev/nbd0", 00:08:26.877 "bdev_name": "Nvme0n1p1" 00:08:26.877 }, 00:08:26.877 { 00:08:26.877 "nbd_device": "/dev/nbd1", 00:08:26.877 "bdev_name": "Nvme0n1p2" 00:08:26.877 }, 00:08:26.877 { 00:08:26.877 "nbd_device": "/dev/nbd2", 00:08:26.877 "bdev_name": "Nvme1n1" 00:08:26.877 }, 00:08:26.877 { 00:08:26.877 "nbd_device": "/dev/nbd3", 00:08:26.877 "bdev_name": "Nvme2n1" 00:08:26.877 }, 00:08:26.877 { 00:08:26.877 "nbd_device": "/dev/nbd4", 00:08:26.877 "bdev_name": "Nvme2n2" 00:08:26.877 }, 00:08:26.877 { 00:08:26.877 "nbd_device": "/dev/nbd5", 00:08:26.877 "bdev_name": "Nvme2n3" 00:08:26.877 }, 00:08:26.877 { 00:08:26.877 "nbd_device": "/dev/nbd6", 00:08:26.877 "bdev_name": "Nvme3n1" 00:08:26.877 } 00:08:26.877 ]' 00:08:26.877 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:26.877 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.877 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' 
'/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:26.877 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:26.877 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:26.877 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.877 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:27.137 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:27.395 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:27.395 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:27.395 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:27.395 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:27.395 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:27.395 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:27.395 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:27.395 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:27.395 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:27.395 00:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:27.692 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:27.692 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:27.692 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:27.692 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:27.692 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:27.692 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:27.692 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:27.692 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:27.692 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:27.692 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:27.970 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:27.971 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:27.971 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:27.971 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:28.229 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:28.229 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:28.229 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:28.229 00:12:42 
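Teardown mirrors startup: the loop at bdev/nbd_common.sh@53-55 issues nbd_stop_disk for each device, and waitfornbd_exit (@35-45) polls /proc/partitions until the name disappears. A sketch reconstructed from the trace; every stop in this run completes before the first poll, so the branch that sleeps while the device is still registered is an assumption:

nbd_stop_disks() {
    local rpc_server=$1 i
    local nbd_list=($2)
    for i in "${nbd_list[@]}"; do    # @53
        "$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_stop_disk "$i"    # @54
        waitfornbd_exit "$(basename "$i")"                               # @55
    done
}

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do                         # @37
        if grep -q -w "$nbd_name" /proc/partitions; then    # @38
            sleep 0.1    # assumed; never hit in this run
        else
            break        # @41: the kernel has released the device
        fi
    done
    return 0             # @45
}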
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:28.229 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:28.229 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:28.229 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:28.229 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:28.229 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:28.229 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.229 00:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:28.489 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:28.748 /dev/nbd0 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.748 1+0 records in 00:08:28.748 1+0 records out 00:08:28.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551966 s, 7.4 MB/s 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:28.748 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:29.006 /dev/nbd1 00:08:29.006 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.007 1+0 records in 00:08:29.007 1+0 records out 00:08:29.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476565 s, 8.6 MB/s 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:29.007 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:29.007 /dev/nbd10 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.266 1+0 records in 00:08:29.266 1+0 records out 00:08:29.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672044 s, 6.1 MB/s 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 
'!=' 0 ']' 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:29.266 /dev/nbd11 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.266 1+0 records in 00:08:29.266 1+0 records out 00:08:29.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526039 s, 7.8 MB/s 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:29.266 00:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:29.525 /dev/nbd12 00:08:29.525 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:29.525 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:29.525 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:08:29.525 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:29.525 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:29.525 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:29.525 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:08:29.525 00:12:44 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:29.526 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:29.526 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:29.526 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.526 1+0 records in 00:08:29.526 1+0 records out 00:08:29.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503424 s, 8.1 MB/s 00:08:29.526 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.526 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:29.526 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.526 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:29.526 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:29.526 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:29.526 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:29.526 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:29.785 /dev/nbd13 00:08:29.785 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:29.785 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:29.785 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:08:29.785 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:29.785 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:29.785 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:29.786 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:08:29.786 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:29.786 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:29.786 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:29.786 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.786 1+0 records in 00:08:29.786 1+0 records out 00:08:29.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558764 s, 7.3 MB/s 00:08:29.786 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.786 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:29.786 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.786 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:29.786 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:29.786 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:08:29.786 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:29.786 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:30.045 /dev/nbd14 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd14 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd14 /proc/partitions 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:30.045 1+0 records in 00:08:30.045 1+0 records out 00:08:30.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811589 s, 5.0 MB/s 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.045 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:30.304 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:30.304 { 00:08:30.304 "nbd_device": "/dev/nbd0", 00:08:30.304 "bdev_name": "Nvme0n1p1" 00:08:30.304 }, 00:08:30.304 { 00:08:30.304 "nbd_device": "/dev/nbd1", 00:08:30.304 "bdev_name": "Nvme0n1p2" 00:08:30.304 }, 00:08:30.304 { 00:08:30.304 "nbd_device": "/dev/nbd10", 00:08:30.304 "bdev_name": "Nvme1n1" 00:08:30.304 }, 00:08:30.304 { 00:08:30.304 "nbd_device": "/dev/nbd11", 00:08:30.304 "bdev_name": "Nvme2n1" 00:08:30.304 }, 00:08:30.304 { 00:08:30.304 "nbd_device": "/dev/nbd12", 00:08:30.304 "bdev_name": "Nvme2n2" 00:08:30.304 }, 00:08:30.304 { 00:08:30.304 "nbd_device": "/dev/nbd13", 00:08:30.304 "bdev_name": "Nvme2n3" 00:08:30.304 }, 00:08:30.304 { 
00:08:30.304 "nbd_device": "/dev/nbd14", 00:08:30.304 "bdev_name": "Nvme3n1" 00:08:30.304 } 00:08:30.304 ]' 00:08:30.304 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:30.304 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:30.304 { 00:08:30.304 "nbd_device": "/dev/nbd0", 00:08:30.304 "bdev_name": "Nvme0n1p1" 00:08:30.304 }, 00:08:30.304 { 00:08:30.304 "nbd_device": "/dev/nbd1", 00:08:30.304 "bdev_name": "Nvme0n1p2" 00:08:30.304 }, 00:08:30.304 { 00:08:30.304 "nbd_device": "/dev/nbd10", 00:08:30.304 "bdev_name": "Nvme1n1" 00:08:30.304 }, 00:08:30.304 { 00:08:30.304 "nbd_device": "/dev/nbd11", 00:08:30.304 "bdev_name": "Nvme2n1" 00:08:30.304 }, 00:08:30.304 { 00:08:30.304 "nbd_device": "/dev/nbd12", 00:08:30.304 "bdev_name": "Nvme2n2" 00:08:30.304 }, 00:08:30.304 { 00:08:30.305 "nbd_device": "/dev/nbd13", 00:08:30.305 "bdev_name": "Nvme2n3" 00:08:30.305 }, 00:08:30.305 { 00:08:30.305 "nbd_device": "/dev/nbd14", 00:08:30.305 "bdev_name": "Nvme3n1" 00:08:30.305 } 00:08:30.305 ]' 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:30.305 /dev/nbd1 00:08:30.305 /dev/nbd10 00:08:30.305 /dev/nbd11 00:08:30.305 /dev/nbd12 00:08:30.305 /dev/nbd13 00:08:30.305 /dev/nbd14' 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:30.305 /dev/nbd1 00:08:30.305 /dev/nbd10 00:08:30.305 /dev/nbd11 00:08:30.305 /dev/nbd12 00:08:30.305 /dev/nbd13 00:08:30.305 /dev/nbd14' 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:30.305 256+0 records in 00:08:30.305 256+0 records out 00:08:30.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00530364 s, 198 MB/s 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:30.305 00:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:30.563 256+0 records in 00:08:30.563 256+0 records out 00:08:30.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13313 s, 7.9 MB/s 00:08:30.563 
00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:30.563 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:30.563 256+0 records in 00:08:30.563 256+0 records out 00:08:30.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14232 s, 7.4 MB/s 00:08:30.563 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:30.563 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:30.822 256+0 records in 00:08:30.822 256+0 records out 00:08:30.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136657 s, 7.7 MB/s 00:08:30.822 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:30.822 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:30.822 256+0 records in 00:08:30.822 256+0 records out 00:08:30.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13359 s, 7.8 MB/s 00:08:30.822 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:30.822 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:31.080 256+0 records in 00:08:31.080 256+0 records out 00:08:31.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133693 s, 7.8 MB/s 00:08:31.080 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.080 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:31.080 256+0 records in 00:08:31.080 256+0 records out 00:08:31.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132373 s, 7.9 MB/s 00:08:31.080 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.080 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:31.339 256+0 records in 00:08:31.339 256+0 records out 00:08:31.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136487 s, 7.7 MB/s 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.339 00:12:45 
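The write pass above and the cmp pass below are the two halves of nbd_dd_data_verify (bdev/nbd_common.sh@70-85): one 1 MiB random pattern is written to every device with O_DIRECT, then the first 1M of each device is byte-compared against the same file. A sketch from the trace, with the scratch path shortened from test/bdev/nbdrandtest:

nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2 i
    local tmp_file=/tmp/nbdrandtest    # trace: .../spdk/test/bdev/nbdrandtest
    if [ "$operation" = write ]; then    # @74
        # @76: one shared 1 MiB pattern for all devices
        dd if=/dev/urandom of=$tmp_file bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            # @78: O_DIRECT so the data reaches the nbd backend, not the page cache
            dd if=$tmp_file of=$i bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then    # @80
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M $tmp_file $i    # @83
        done
        rm $tmp_file    # @85
    fi
}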
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.339 00:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:31.598 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:31.598 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:31.598 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:31.598 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.598 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.599 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:31.599 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:31.599 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.599 
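Both the 7-vs-7 check that gated the write pass above and the 0-vs-0 re-check once the devices are torn down below come from nbd_get_count (bdev/nbd_common.sh@61-66): list the mapped devices over RPC and count /dev/nbd names in the jq output. A sketch using the variable names visible in the trace; the `|| true` guard is inferred from the bare `true` line the trace shows when the list is empty, since grep -c exits nonzero on zero matches:

nbd_get_count() {
    local rpc_server=$1 count
    local nbd_disks_json nbd_disks_name
    nbd_disks_json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)    # @63
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')          # @64
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)                    # @65, guard assumed
    echo "$count"                                                                 # @66
}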
00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.599 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.858 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:32.118 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:32.118 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:32.118 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:32.118 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.118 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.118 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:32.118 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:32.118 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.118 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.118 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:32.377 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:32.377 00:12:46 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:32.377 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:32.377 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.377 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.377 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:32.377 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:32.377 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.377 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.377 00:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:32.636 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:32.636 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:32.636 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:32.636 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.636 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.636 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:32.636 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:32.636 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.636 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.636 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:32.895 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:32.895 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:32.895 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:32.895 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.895 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.895 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:32.895 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:32.895 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.895 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:32.895 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.895 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:32.895 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:32.895 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:32.895 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:33.154 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:33.154 
00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:33.154 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:33.154 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:33.154 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:33.154 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:33.154 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:33.154 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:33.154 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:33.155 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:33.155 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.155 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:33.155 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:33.155 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:33.155 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:33.155 malloc_lvol_verify 00:08:33.155 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:33.414 e32f31a9-0482-4cc2-bba2-5b1b07021ae2 00:08:33.414 00:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:33.672 24fa3cc5-fc29-42c8-8412-81fcd46a2bfb 00:08:33.672 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:33.672 /dev/nbd0 00:08:33.672 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:33.930 mke2fs 1.46.5 (30-Dec-2021) 00:08:33.930 Discarding device blocks: 0/4096 done 00:08:33.930 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:33.930 00:08:33.930 Allocating group tables: 0/1 done 00:08:33.930 Writing inode tables: 0/1 done 00:08:33.930 Creating journal (1024 blocks): done 00:08:33.930 Writing superblocks and filesystem accounting information: 0/1 done 00:08:33.930 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:33.930 
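The sequence above (bdev/nbd_common.sh@131-142, driven from blockdev.sh@324) is the last nbd exercise: layer a logical volume over a malloc bdev, export it through nbd, and treat a successful mkfs.ext4 as proof that the exported block device works end to end. blockdev.sh@324 passes the whole device list, but only the first entry is used (@138 starts lvs/lvol on /dev/nbd0). A sketch taking a single device for clarity; the failure branch at @143 is not exercised in this run, so its body is an assumption:

nbd_with_lvol_verify() {
    local rpc_server=$1 nbd=$2 mkfs_ret
    local rpc="$rootdir/scripts/rpc.py -s $rpc_server"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512    # @135: 16 MB bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs    # @136
    $rpc bdev_lvol_create lvol 4 -l lvs                     # @137: 4 MB lvol in that store
    $rpc nbd_start_disk lvs/lvol "$nbd"                     # @138
    mkfs.ext4 "$nbd"                                        # @140
    mkfs_ret=$?                                             # @141
    nbd_stop_disks "$rpc_server" "$nbd"                     # @142
    [ "$mkfs_ret" -ne 0 ] && return 1                       # @143; assumed failure path
    return 0                                                # @147
}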
00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 78724 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 78724 ']' 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 78724 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78724 00:08:33.930 00:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:33.931 killing process with pid 78724 00:08:33.931 00:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:33.931 00:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78724' 00:08:33.931 00:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@965 -- # kill 78724 00:08:33.931 00:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # wait 78724 00:08:34.497 00:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:34.497 00:08:34.497 real 0m10.214s 00:08:34.497 user 0m13.362s 00:08:34.497 sys 0m4.659s 00:08:34.497 00:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:34.497 ************************************ 00:08:34.497 END TEST bdev_nbd 00:08:34.497 ************************************ 00:08:34.497 00:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:34.497 00:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:34.497 00:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:08:34.497 00:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:08:34.497 skipping fio tests on NVMe due to multi-ns failures. 00:08:34.497 00:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
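killprocess, used above to shut down the nbd daemon (pid 78724), sanity-checks the pid before signalling it. A simplified sketch of the helper as it appears in the trace; the branch where the process turns out to be a sudo wrapper is handled differently in the real helper and is only flagged here:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0                    # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1    # real helper resolves the child instead
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }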
00:08:34.497 00:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:34.497 00:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:34.497 00:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:08:34.497 00:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:34.497 00:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:34.497 ************************************ 00:08:34.497 START TEST bdev_verify 00:08:34.497 ************************************ 00:08:34.497 00:12:48 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:34.497 [2024-07-23 00:12:49.045304] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:34.497 [2024-07-23 00:12:49.045439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79126 ] 00:08:34.756 [2024-07-23 00:12:49.197011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:34.756 [2024-07-23 00:12:49.240696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.756 [2024-07-23 00:12:49.240803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.014 Running I/O for 5 seconds... 00:08:40.305 00:08:40.305 Latency(us) 00:08:40.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.305 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:40.305 Verification LBA range: start 0x0 length 0x5e800 00:08:40.305 Nvme0n1p1 : 5.04 1472.15 5.75 0.00 0.00 86637.29 20002.96 93066.38 00:08:40.305 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:40.305 Verification LBA range: start 0x5e800 length 0x5e800 00:08:40.305 Nvme0n1p1 : 5.04 1497.29 5.85 0.00 0.00 85163.82 18002.66 87170.78 00:08:40.305 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:40.305 Verification LBA range: start 0x0 length 0x5e7ff 00:08:40.305 Nvme0n1p2 : 5.04 1471.76 5.75 0.00 0.00 86513.41 22424.37 90539.69 00:08:40.305 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:40.305 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:08:40.305 Nvme0n1p2 : 5.08 1499.47 5.86 0.00 0.00 84814.72 10896.35 80432.94 00:08:40.305 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:40.305 Verification LBA range: start 0x0 length 0xa0000 00:08:40.305 Nvme1n1 : 5.07 1475.97 5.77 0.00 0.00 86079.92 9159.25 79169.59 00:08:40.305 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:40.305 Verification LBA range: start 0xa0000 length 0xa0000 00:08:40.305 Nvme1n1 : 5.09 1508.48 5.89 0.00 0.00 84254.59 8948.69 63588.34 00:08:40.305 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:40.305 Verification LBA range: start 0x0 length 0x80000 00:08:40.305 Nvme2n1 : 5.09 1483.94 5.80 0.00 0.00 85625.30 10948.99 75800.67 00:08:40.305 Job: Nvme2n1 (Core Mask 0x2, workload: 
verify, depth: 128, IO size: 4096) 00:08:40.305 Verification LBA range: start 0x80000 length 0x80000 00:08:40.305 Nvme2n1 : 5.09 1508.04 5.89 0.00 0.00 84125.57 8948.69 59798.31 00:08:40.305 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:40.305 Verification LBA range: start 0x0 length 0x80000 00:08:40.305 Nvme2n2 : 5.09 1483.54 5.80 0.00 0.00 85538.50 11528.02 81275.17 00:08:40.305 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:40.305 Verification LBA range: start 0x80000 length 0x80000 00:08:40.305 Nvme2n2 : 5.09 1507.60 5.89 0.00 0.00 84000.78 8896.05 61482.77 00:08:40.305 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:40.305 Verification LBA range: start 0x0 length 0x80000 00:08:40.305 Nvme2n3 : 5.09 1483.21 5.79 0.00 0.00 85418.70 10369.95 86749.66 00:08:40.305 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:40.305 Verification LBA range: start 0x80000 length 0x80000 00:08:40.305 Nvme2n3 : 5.10 1507.26 5.89 0.00 0.00 83868.86 8896.05 63588.34 00:08:40.305 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:40.305 Verification LBA range: start 0x0 length 0x20000 00:08:40.305 Nvme3n1 : 5.09 1482.77 5.79 0.00 0.00 85303.49 10054.12 92224.15 00:08:40.305 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:40.305 Verification LBA range: start 0x20000 length 0x20000 00:08:40.305 Nvme3n1 : 5.10 1506.91 5.89 0.00 0.00 83784.70 8948.69 64430.57 00:08:40.305 =================================================================================================================== 00:08:40.305 Total : 20888.38 81.60 0.00 0.00 85069.84 8896.05 93066.38 00:08:40.874 00:08:40.874 real 0m6.314s 00:08:40.874 user 0m11.774s 00:08:40.874 sys 0m0.272s 00:08:40.874 00:12:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:40.874 00:12:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:40.874 ************************************ 00:08:40.874 END TEST bdev_verify 00:08:40.874 ************************************ 00:08:40.874 00:12:55 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:40.874 00:12:55 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:08:40.874 00:12:55 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:40.874 00:12:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:40.874 ************************************ 00:08:40.874 START TEST bdev_verify_big_io 00:08:40.874 ************************************ 00:08:40.874 00:12:55 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:40.874 [2024-07-23 00:12:55.437816] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
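Both bdevperf passes in this stretch use the same command line; only the I/O size differs (-o 4096 for bdev_verify above, -o 65536 for the big-I/O pass that just started). Flag meanings as used here, with -C kept exactly as in the trace since its purpose is not shown in this log:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # -q 128   queue depth            -w verify   write, read back, compare
    # -o 4096  I/O size in bytes      -t 5        run for 5 seconds
    # -m 0x3   cores 0 and 1 (matches the two reactors started above)
    $BDEVPERF --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3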
00:08:40.874 [2024-07-23 00:12:55.437964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79212 ] 00:08:41.132 [2024-07-23 00:12:55.587735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:41.132 [2024-07-23 00:12:55.631124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.132 [2024-07-23 00:12:55.631237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.699 Running I/O for 5 seconds... 00:08:48.268 00:08:48.268 Latency(us) 00:08:48.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.269 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:48.269 Verification LBA range: start 0x0 length 0x5e80 00:08:48.269 Nvme0n1p1 : 5.74 101.31 6.33 0.00 0.00 1226798.39 29899.16 1158908.09 00:08:48.269 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:48.269 Verification LBA range: start 0x5e80 length 0x5e80 00:08:48.269 Nvme0n1p1 : 5.59 177.62 11.10 0.00 0.00 696863.48 30530.83 923083.77 00:08:48.269 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:48.269 Verification LBA range: start 0x0 length 0x5e7f 00:08:48.269 Nvme0n1p2 : 5.75 101.62 6.35 0.00 0.00 1194882.05 63588.34 1367781.06 00:08:48.269 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:48.269 Verification LBA range: start 0x5e7f length 0x5e7f 00:08:48.269 Nvme0n1p2 : 5.59 174.70 10.92 0.00 0.00 689347.12 80854.05 1078054.04 00:08:48.269 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:48.269 Verification LBA range: start 0x0 length 0xa000 00:08:48.269 Nvme1n1 : 5.77 111.74 6.98 0.00 0.00 1074862.63 17160.43 1367781.06 00:08:48.269 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:48.269 Verification LBA range: start 0xa000 length 0xa000 00:08:48.269 Nvme1n1 : 5.59 199.81 12.49 0.00 0.00 597596.10 77906.25 660308.10 00:08:48.269 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:48.269 Verification LBA range: start 0x0 length 0x8000 00:08:48.269 Nvme2n1 : 5.77 111.55 6.97 0.00 0.00 1052347.56 17265.71 1367781.06 00:08:48.269 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:48.269 Verification LBA range: start 0x8000 length 0x8000 00:08:48.269 Nvme2n1 : 5.69 199.89 12.49 0.00 0.00 581278.51 54323.82 909608.10 00:08:48.269 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:48.269 Verification LBA range: start 0x0 length 0x8000 00:08:48.269 Nvme2n2 : 5.77 110.94 6.93 0.00 0.00 1035157.90 17160.43 1280189.17 00:08:48.269 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:48.269 Verification LBA range: start 0x8000 length 0x8000 00:08:48.269 Nvme2n2 : 5.69 199.12 12.44 0.00 0.00 574610.85 39584.80 1226286.47 00:08:48.269 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:48.269 Verification LBA range: start 0x0 length 0x8000 00:08:48.269 Nvme2n3 : 5.78 116.19 7.26 0.00 0.00 967781.49 3868.99 1421683.77 00:08:48.269 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:48.269 Verification LBA range: start 0x8000 length 0x8000 00:08:48.269 Nvme2n3 : 5.76 208.88 13.06 0.00 0.00 536208.04 26214.40 
1246499.98 00:08:48.269 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:48.269 Verification LBA range: start 0x0 length 0x2000 00:08:48.269 Nvme3n1 : 5.78 121.65 7.60 0.00 0.00 904161.07 1408.10 1421683.77 00:08:48.269 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:48.269 Verification LBA range: start 0x2000 length 0x2000 00:08:48.269 Nvme3n1 : 5.76 224.81 14.05 0.00 0.00 488315.08 1454.16 1253237.82 00:08:48.269 =================================================================================================================== 00:08:48.269 Total : 2159.82 134.99 0.00 0.00 758888.44 1408.10 1421683.77 00:08:48.269 00:08:48.269 real 0m7.266s 00:08:48.269 user 0m13.628s 00:08:48.269 sys 0m0.319s 00:08:48.269 00:13:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:48.269 00:13:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:48.269 ************************************ 00:08:48.269 END TEST bdev_verify_big_io 00:08:48.269 ************************************ 00:08:48.269 00:13:02 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:48.269 00:13:02 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:48.269 00:13:02 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:48.269 00:13:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.269 ************************************ 00:08:48.269 START TEST bdev_write_zeroes 00:08:48.269 ************************************ 00:08:48.269 00:13:02 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:48.269 [2024-07-23 00:13:02.780139] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:48.269 [2024-07-23 00:13:02.780304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79306 ] 00:08:48.269 [2024-07-23 00:13:02.931399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.528 [2024-07-23 00:13:02.974544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.784 Running I/O for 1 seconds... 
00:08:50.155 00:08:50.155 Latency(us) 00:08:50.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.155 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:50.155 Nvme0n1p1 : 1.01 9792.92 38.25 0.00 0.00 13031.45 7053.67 29688.60 00:08:50.155 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:50.155 Nvme0n1p2 : 1.01 9782.17 38.21 0.00 0.00 13026.99 7422.15 29688.60 00:08:50.155 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:50.155 Nvme1n1 : 1.02 9773.35 38.18 0.00 0.00 13011.88 10317.31 28425.25 00:08:50.155 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:50.155 Nvme2n1 : 1.02 9807.58 38.31 0.00 0.00 12947.72 7580.07 25898.56 00:08:50.155 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:50.155 Nvme2n2 : 1.02 9797.91 38.27 0.00 0.00 12925.27 7790.62 24740.50 00:08:50.155 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:50.155 Nvme2n3 : 1.02 9837.04 38.43 0.00 0.00 12800.69 4316.43 20108.23 00:08:50.155 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:50.155 Nvme3n1 : 1.02 9876.20 38.58 0.00 0.00 12712.61 3250.48 18739.61 00:08:50.155 =================================================================================================================== 00:08:50.155 Total : 68667.17 268.23 0.00 0.00 12921.60 3250.48 29688.60 00:08:50.155 00:08:50.155 real 0m1.946s 00:08:50.155 user 0m1.611s 00:08:50.155 sys 0m0.228s 00:08:50.155 00:13:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:50.155 00:13:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:50.155 ************************************ 00:08:50.155 END TEST bdev_write_zeroes 00:08:50.155 ************************************ 00:08:50.155 00:13:04 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:50.155 00:13:04 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:50.155 00:13:04 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:50.155 00:13:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:50.155 ************************************ 00:08:50.155 START TEST bdev_json_nonenclosed 00:08:50.155 ************************************ 00:08:50.155 00:13:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:50.155 [2024-07-23 00:13:04.802716] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:50.155 [2024-07-23 00:13:04.802838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79348 ] 00:08:50.415 [2024-07-23 00:13:04.953636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.415 [2024-07-23 00:13:04.995733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.415 [2024-07-23 00:13:04.995815] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:50.415 [2024-07-23 00:13:04.995840] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:50.415 [2024-07-23 00:13:04.995853] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.674 00:08:50.674 real 0m0.380s 00:08:50.674 user 0m0.149s 00:08:50.674 sys 0m0.128s 00:08:50.674 00:13:05 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:50.674 00:13:05 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:50.674 ************************************ 00:08:50.674 END TEST bdev_json_nonenclosed 00:08:50.674 ************************************ 00:08:50.674 00:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:50.674 00:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:50.674 00:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:50.674 00:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:50.674 ************************************ 00:08:50.674 START TEST bdev_json_nonarray 00:08:50.674 ************************************ 00:08:50.674 00:13:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:50.674 [2024-07-23 00:13:05.258582] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:50.674 [2024-07-23 00:13:05.258705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79373 ] 00:08:50.934 [2024-07-23 00:13:05.410176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.934 [2024-07-23 00:13:05.451714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.934 [2024-07-23 00:13:05.451832] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
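bdev_json_nonenclosed (finished above) and bdev_json_nonarray (just launched) each hand bdevperf a deliberately malformed --json config and pass only if the app rejects it and exits non-zero, so the *ERROR* and spdk_app_stop lines around this point are the expected outcome. For contrast, a minimal well-formed config is a single object whose subsystems member is an array (a sketch, not the fixture used by this run):

    cat > good.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF
    # nonenclosed.json drops the outer {} around the object;
    # nonarray.json makes "subsystems" something other than an array.
    # bdevperf --json must fail on both for these tests to pass.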
00:08:50.934 [2024-07-23 00:13:05.451871] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:50.934 [2024-07-23 00:13:05.451891] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.934 00:08:50.934 real 0m0.384s 00:08:50.934 user 0m0.158s 00:08:50.934 sys 0m0.121s 00:08:50.934 00:13:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:50.934 00:13:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:50.934 ************************************ 00:08:50.934 END TEST bdev_json_nonarray 00:08:50.934 ************************************ 00:08:51.193 00:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:08:51.193 00:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:08:51.193 00:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:51.193 00:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:51.193 00:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:51.193 00:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:51.193 ************************************ 00:08:51.193 START TEST bdev_gpt_uuid 00:08:51.193 ************************************ 00:08:51.193 00:13:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1121 -- # bdev_gpt_uuid 00:08:51.193 00:13:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:08:51.193 00:13:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:08:51.193 00:13:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=79399 00:08:51.193 00:13:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:51.193 00:13:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:51.193 00:13:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 79399 00:08:51.193 00:13:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@827 -- # '[' -z 79399 ']' 00:08:51.193 00:13:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.193 00:13:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:51.193 00:13:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.193 00:13:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:51.193 00:13:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:51.193 [2024-07-23 00:13:05.725078] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
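The bdev_gpt_uuid test starting here loads the saved bdev config into a fresh spdk_tgt and then checks that each GPT partition bdev can be looked up by its partition GUID, with the GUID reported consistently as both the bdev alias and unique_partition_guid. The checks, condensed from the trace that follows:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
    rpc load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    rpc bdev_wait_for_examine
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030          # SPDK_TEST_first partition
    bdev=$(rpc bdev_get_bdevs -b "$uuid")
    [ "$(jq -r length <<< "$bdev")" = 1 ]
    [ "$(jq -r '.[0].aliases[0]' <<< "$bdev")" = "$uuid" ]
    [ "$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev")" = "$uuid" ]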
00:08:51.193 [2024-07-23 00:13:05.725232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79399 ] 00:08:51.193 [2024-07-23 00:13:05.870674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.452 [2024-07-23 00:13:05.912234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.020 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:52.020 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # return 0 00:08:52.020 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:52.020 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.020 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:52.279 Some configs were skipped because the RPC state that can call them passed over. 00:08:52.279 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.279 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:08:52.279 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.279 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:52.279 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.279 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:52.279 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.279 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:52.279 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.279 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:08:52.279 { 00:08:52.279 "name": "Nvme0n1p1", 00:08:52.279 "aliases": [ 00:08:52.279 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:52.279 ], 00:08:52.279 "product_name": "GPT Disk", 00:08:52.279 "block_size": 4096, 00:08:52.279 "num_blocks": 774144, 00:08:52.279 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:52.279 "md_size": 64, 00:08:52.279 "md_interleave": false, 00:08:52.279 "dif_type": 0, 00:08:52.279 "assigned_rate_limits": { 00:08:52.279 "rw_ios_per_sec": 0, 00:08:52.279 "rw_mbytes_per_sec": 0, 00:08:52.279 "r_mbytes_per_sec": 0, 00:08:52.279 "w_mbytes_per_sec": 0 00:08:52.279 }, 00:08:52.279 "claimed": false, 00:08:52.279 "zoned": false, 00:08:52.279 "supported_io_types": { 00:08:52.279 "read": true, 00:08:52.279 "write": true, 00:08:52.279 "unmap": true, 00:08:52.279 "write_zeroes": true, 00:08:52.279 "flush": true, 00:08:52.280 "reset": true, 00:08:52.280 "compare": true, 00:08:52.280 "compare_and_write": false, 00:08:52.280 "abort": true, 00:08:52.280 "nvme_admin": false, 00:08:52.280 "nvme_io": false 00:08:52.280 }, 00:08:52.280 "driver_specific": { 00:08:52.280 "gpt": { 00:08:52.280 "base_bdev": "Nvme0n1", 00:08:52.280 "offset_blocks": 256, 00:08:52.280 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:52.280 "unique_partition_guid": 
"6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:52.280 "partition_name": "SPDK_TEST_first" 00:08:52.280 } 00:08:52.280 } 00:08:52.280 } 00:08:52.280 ]' 00:08:52.280 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:08:52.280 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:08:52.280 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:08:52.280 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:52.280 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:52.539 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:52.539 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:52.539 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.539 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:52.539 00:13:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:08:52.539 { 00:08:52.539 "name": "Nvme0n1p2", 00:08:52.539 "aliases": [ 00:08:52.539 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:52.539 ], 00:08:52.539 "product_name": "GPT Disk", 00:08:52.539 "block_size": 4096, 00:08:52.539 "num_blocks": 774143, 00:08:52.539 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:52.539 "md_size": 64, 00:08:52.539 "md_interleave": false, 00:08:52.539 "dif_type": 0, 00:08:52.539 "assigned_rate_limits": { 00:08:52.539 "rw_ios_per_sec": 0, 00:08:52.539 "rw_mbytes_per_sec": 0, 00:08:52.539 "r_mbytes_per_sec": 0, 00:08:52.539 "w_mbytes_per_sec": 0 00:08:52.539 }, 00:08:52.539 "claimed": false, 00:08:52.539 "zoned": false, 00:08:52.539 "supported_io_types": { 00:08:52.539 "read": true, 00:08:52.539 "write": true, 00:08:52.539 "unmap": true, 00:08:52.539 "write_zeroes": true, 00:08:52.539 "flush": true, 00:08:52.539 "reset": true, 00:08:52.539 "compare": true, 00:08:52.539 "compare_and_write": false, 00:08:52.539 "abort": true, 00:08:52.539 "nvme_admin": false, 00:08:52.539 "nvme_io": false 00:08:52.539 }, 00:08:52.539 "driver_specific": { 00:08:52.539 "gpt": { 00:08:52.539 "base_bdev": "Nvme0n1", 00:08:52.539 "offset_blocks": 774400, 00:08:52.539 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:52.539 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:52.539 "partition_name": "SPDK_TEST_second" 00:08:52.539 } 00:08:52.539 } 00:08:52.539 } 00:08:52.539 ]' 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- 
bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 79399 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@946 -- # '[' -z 79399 ']' 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # kill -0 79399 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # uname 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79399 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:52.539 killing process with pid 79399 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79399' 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@965 -- # kill 79399 00:08:52.539 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # wait 79399 00:08:53.107 ************************************ 00:08:53.107 END TEST bdev_gpt_uuid 00:08:53.107 ************************************ 00:08:53.107 00:08:53.107 real 0m1.901s 00:08:53.107 user 0m2.006s 00:08:53.107 sys 0m0.453s 00:08:53.107 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:53.107 00:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:53.107 00:13:07 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:08:53.107 00:13:07 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:08:53.107 00:13:07 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:08:53.107 00:13:07 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:53.107 00:13:07 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:53.107 00:13:07 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:53.107 00:13:07 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:53.107 00:13:07 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:53.107 00:13:07 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:53.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:53.932 Waiting for block devices as requested 00:08:53.932 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:53.932 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.191 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.191 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:59.458 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:59.458 00:13:13 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:08:59.458 00:13:13 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- 
# wipefs --all /dev/nvme1n1 00:08:59.458 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:59.458 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:08:59.458 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:59.458 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:08:59.458 00:13:14 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:59.458 00:08:59.458 real 0m49.915s 00:08:59.458 user 1m0.228s 00:08:59.458 sys 0m10.817s 00:08:59.458 ************************************ 00:08:59.458 00:13:14 blockdev_nvme_gpt -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:59.458 00:13:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.458 END TEST blockdev_nvme_gpt 00:08:59.458 ************************************ 00:08:59.717 00:13:14 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:59.717 00:13:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:59.717 00:13:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:59.717 00:13:14 -- common/autotest_common.sh@10 -- # set +x 00:08:59.717 ************************************ 00:08:59.717 START TEST nvme 00:08:59.717 ************************************ 00:08:59.717 00:13:14 nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:59.717 * Looking for test storage... 00:08:59.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:59.717 00:13:14 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:00.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:01.248 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:01.248 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:01.248 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:01.248 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:01.524 00:13:15 nvme -- nvme/nvme.sh@79 -- # uname 00:09:01.524 00:13:15 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:01.524 00:13:15 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:01.524 00:13:15 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:01.524 00:13:15 nvme -- common/autotest_common.sh@1078 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:01.524 00:13:15 nvme -- common/autotest_common.sh@1064 -- # _randomize_va_space=2 00:09:01.524 00:13:15 nvme -- common/autotest_common.sh@1065 -- # echo 0 00:09:01.524 00:13:15 nvme -- common/autotest_common.sh@1066 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:01.524 00:13:15 nvme -- common/autotest_common.sh@1067 -- # stubpid=80029 00:09:01.524 00:13:15 nvme -- common/autotest_common.sh@1068 -- # echo Waiting for stub to ready for secondary processes... 00:09:01.524 Waiting for stub to ready for secondary processes... 00:09:01.524 00:13:15 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:01.524 00:13:15 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/80029 ]] 00:09:01.524 00:13:15 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:09:01.524 [2024-07-23 00:13:15.986510] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:09:01.524 [2024-07-23 00:13:15.986635] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:02.462 00:13:16 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:02.462 00:13:16 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/80029 ]] 00:09:02.462 00:13:16 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:09:02.462 [2024-07-23 00:13:16.974377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:02.462 [2024-07-23 00:13:17.003166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.462 [2024-07-23 00:13:17.003253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.462 [2024-07-23 00:13:17.003391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.462 [2024-07-23 00:13:17.014960] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:02.462 [2024-07-23 00:13:17.014997] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:02.462 [2024-07-23 00:13:17.030722] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:02.462 [2024-07-23 00:13:17.031095] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:02.462 [2024-07-23 00:13:17.031788] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:02.462 [2024-07-23 00:13:17.031974] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:02.462 [2024-07-23 00:13:17.032039] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:02.462 [2024-07-23 00:13:17.032704] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:02.462 [2024-07-23 00:13:17.032906] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:02.462 [2024-07-23 00:13:17.032959] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:02.462 [2024-07-23 00:13:17.033673] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:02.462 [2024-07-23 00:13:17.033881] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:02.462 [2024-07-23 00:13:17.034004] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:02.462 [2024-07-23 00:13:17.034063] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:02.462 [2024-07-23 00:13:17.034119] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:03.399 done. 00:09:03.399 00:13:17 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:03.399 00:13:17 nvme -- common/autotest_common.sh@1074 -- # echo done. 
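The 'Waiting for stub to ready...' exchange above is the standard primary/secondary hand-off in nvme.sh: the stub is started as the long-lived DPDK primary process, and the harness polls for the /var/run/spdk_stub0 marker while confirming the stub (pid 80029 here) is still alive. A condensed sketch of the pattern from the trace:

    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ]; do
        [[ -e /proc/$stubpid ]] || exit 1   # stub died before creating the marker
        sleep 1s
    done
    echo done.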
00:09:03.399 00:13:17 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:03.399 00:13:17 nvme -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:09:03.399 00:13:17 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:03.399 00:13:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:03.399 ************************************ 00:09:03.399 START TEST nvme_reset 00:09:03.399 ************************************ 00:09:03.399 00:13:17 nvme.nvme_reset -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:03.658 Initializing NVMe Controllers 00:09:03.658 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:03.658 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:03.659 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:03.659 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:03.659 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:03.659 ************************************ 00:09:03.659 END TEST nvme_reset 00:09:03.659 ************************************ 00:09:03.659 00:09:03.659 real 0m0.241s 00:09:03.659 user 0m0.081s 00:09:03.659 sys 0m0.114s 00:09:03.659 00:13:18 nvme.nvme_reset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:03.659 00:13:18 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:03.659 00:13:18 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:03.659 00:13:18 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:03.659 00:13:18 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:03.659 00:13:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:03.659 ************************************ 00:09:03.659 START TEST nvme_identify 00:09:03.659 ************************************ 00:09:03.659 00:13:18 nvme.nvme_identify -- common/autotest_common.sh@1121 -- # nvme_identify 00:09:03.659 00:13:18 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:03.659 00:13:18 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:03.659 00:13:18 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:03.659 00:13:18 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:03.659 00:13:18 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:03.659 00:13:18 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # local bdfs 00:09:03.659 00:13:18 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:03.659 00:13:18 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:03.659 00:13:18 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:09:03.920 00:13:18 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:09:03.920 00:13:18 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:03.920 00:13:18 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:03.920 [2024-07-23 00:13:18.563418] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 80062 terminated unexpected 00:09:03.920 ===================================================== 00:09:03.920 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:03.920 
===================================================== 00:09:03.920 Controller Capabilities/Features 00:09:03.920 ================================ 00:09:03.920 Vendor ID: 1b36 00:09:03.920 Subsystem Vendor ID: 1af4 00:09:03.920 Serial Number: 12340 00:09:03.920 Model Number: QEMU NVMe Ctrl 00:09:03.920 Firmware Version: 8.0.0 00:09:03.920 Recommended Arb Burst: 6 00:09:03.920 IEEE OUI Identifier: 00 54 52 00:09:03.920 Multi-path I/O 00:09:03.920 May have multiple subsystem ports: No 00:09:03.920 May have multiple controllers: No 00:09:03.920 Associated with SR-IOV VF: No 00:09:03.920 Max Data Transfer Size: 524288 00:09:03.920 Max Number of Namespaces: 256 00:09:03.920 Max Number of I/O Queues: 64 00:09:03.920 NVMe Specification Version (VS): 1.4 00:09:03.920 NVMe Specification Version (Identify): 1.4 00:09:03.920 Maximum Queue Entries: 2048 00:09:03.920 Contiguous Queues Required: Yes 00:09:03.920 Arbitration Mechanisms Supported 00:09:03.920 Weighted Round Robin: Not Supported 00:09:03.920 Vendor Specific: Not Supported 00:09:03.920 Reset Timeout: 7500 ms 00:09:03.920 Doorbell Stride: 4 bytes 00:09:03.920 NVM Subsystem Reset: Not Supported 00:09:03.920 Command Sets Supported 00:09:03.920 NVM Command Set: Supported 00:09:03.920 Boot Partition: Not Supported 00:09:03.920 Memory Page Size Minimum: 4096 bytes 00:09:03.920 Memory Page Size Maximum: 65536 bytes 00:09:03.920 Persistent Memory Region: Not Supported 00:09:03.920 Optional Asynchronous Events Supported 00:09:03.920 Namespace Attribute Notices: Supported 00:09:03.920 Firmware Activation Notices: Not Supported 00:09:03.920 ANA Change Notices: Not Supported 00:09:03.920 PLE Aggregate Log Change Notices: Not Supported 00:09:03.920 LBA Status Info Alert Notices: Not Supported 00:09:03.920 EGE Aggregate Log Change Notices: Not Supported 00:09:03.920 Normal NVM Subsystem Shutdown event: Not Supported 00:09:03.920 Zone Descriptor Change Notices: Not Supported 00:09:03.921 Discovery Log Change Notices: Not Supported 00:09:03.921 Controller Attributes 00:09:03.921 128-bit Host Identifier: Not Supported 00:09:03.921 Non-Operational Permissive Mode: Not Supported 00:09:03.921 NVM Sets: Not Supported 00:09:03.921 Read Recovery Levels: Not Supported 00:09:03.921 Endurance Groups: Not Supported 00:09:03.921 Predictable Latency Mode: Not Supported 00:09:03.921 Traffic Based Keep ALive: Not Supported 00:09:03.921 Namespace Granularity: Not Supported 00:09:03.921 SQ Associations: Not Supported 00:09:03.921 UUID List: Not Supported 00:09:03.921 Multi-Domain Subsystem: Not Supported 00:09:03.921 Fixed Capacity Management: Not Supported 00:09:03.921 Variable Capacity Management: Not Supported 00:09:03.921 Delete Endurance Group: Not Supported 00:09:03.921 Delete NVM Set: Not Supported 00:09:03.921 Extended LBA Formats Supported: Supported 00:09:03.921 Flexible Data Placement Supported: Not Supported 00:09:03.921 00:09:03.921 Controller Memory Buffer Support 00:09:03.921 ================================ 00:09:03.921 Supported: No 00:09:03.921 00:09:03.921 Persistent Memory Region Support 00:09:03.921 ================================ 00:09:03.921 Supported: No 00:09:03.921 00:09:03.921 Admin Command Set Attributes 00:09:03.921 ============================ 00:09:03.921 Security Send/Receive: Not Supported 00:09:03.921 Format NVM: Supported 00:09:03.921 Firmware Activate/Download: Not Supported 00:09:03.921 Namespace Management: Supported 00:09:03.921 Device Self-Test: Not Supported 00:09:03.921 Directives: Supported 00:09:03.921 NVMe-MI: Not Supported 
00:09:03.921 Virtualization Management: Not Supported 00:09:03.921 Doorbell Buffer Config: Supported 00:09:03.921 Get LBA Status Capability: Not Supported 00:09:03.921 Command & Feature Lockdown Capability: Not Supported 00:09:03.921 Abort Command Limit: 4 00:09:03.921 Async Event Request Limit: 4 00:09:03.921 Number of Firmware Slots: N/A 00:09:03.921 Firmware Slot 1 Read-Only: N/A 00:09:03.921 Firmware Activation Without Reset: N/A 00:09:03.921 Multiple Update Detection Support: N/A 00:09:03.921 Firmware Update Granularity: No Information Provided 00:09:03.921 Per-Namespace SMART Log: Yes 00:09:03.921 Asymmetric Namespace Access Log Page: Not Supported 00:09:03.921 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:03.921 Command Effects Log Page: Supported 00:09:03.921 Get Log Page Extended Data: Supported 00:09:03.921 Telemetry Log Pages: Not Supported 00:09:03.921 Persistent Event Log Pages: Not Supported 00:09:03.921 Supported Log Pages Log Page: May Support 00:09:03.921 Commands Supported & Effects Log Page: Not Supported 00:09:03.921 Feature Identifiers & Effects Log Page:May Support 00:09:03.921 NVMe-MI Commands & Effects Log Page: May Support 00:09:03.921 Data Area 4 for Telemetry Log: Not Supported 00:09:03.921 Error Log Page Entries Supported: 1 00:09:03.921 Keep Alive: Not Supported 00:09:03.921 00:09:03.921 NVM Command Set Attributes 00:09:03.921 ========================== 00:09:03.921 Submission Queue Entry Size 00:09:03.921 Max: 64 00:09:03.921 Min: 64 00:09:03.921 Completion Queue Entry Size 00:09:03.921 Max: 16 00:09:03.921 Min: 16 00:09:03.921 Number of Namespaces: 256 00:09:03.921 Compare Command: Supported 00:09:03.921 Write Uncorrectable Command: Not Supported 00:09:03.921 Dataset Management Command: Supported 00:09:03.921 Write Zeroes Command: Supported 00:09:03.921 Set Features Save Field: Supported 00:09:03.921 Reservations: Not Supported 00:09:03.921 Timestamp: Supported 00:09:03.921 Copy: Supported 00:09:03.921 Volatile Write Cache: Present 00:09:03.921 Atomic Write Unit (Normal): 1 00:09:03.921 Atomic Write Unit (PFail): 1 00:09:03.921 Atomic Compare & Write Unit: 1 00:09:03.921 Fused Compare & Write: Not Supported 00:09:03.921 Scatter-Gather List 00:09:03.921 SGL Command Set: Supported 00:09:03.921 SGL Keyed: Not Supported 00:09:03.921 SGL Bit Bucket Descriptor: Not Supported 00:09:03.921 SGL Metadata Pointer: Not Supported 00:09:03.921 Oversized SGL: Not Supported 00:09:03.921 SGL Metadata Address: Not Supported 00:09:03.921 SGL Offset: Not Supported 00:09:03.921 Transport SGL Data Block: Not Supported 00:09:03.921 Replay Protected Memory Block: Not Supported 00:09:03.921 00:09:03.921 Firmware Slot Information 00:09:03.921 ========================= 00:09:03.921 Active slot: 1 00:09:03.921 Slot 1 Firmware Revision: 1.0 00:09:03.921 00:09:03.921 00:09:03.921 Commands Supported and Effects 00:09:03.921 ============================== 00:09:03.921 Admin Commands 00:09:03.921 -------------- 00:09:03.921 Delete I/O Submission Queue (00h): Supported 00:09:03.921 Create I/O Submission Queue (01h): Supported 00:09:03.921 Get Log Page (02h): Supported 00:09:03.921 Delete I/O Completion Queue (04h): Supported 00:09:03.921 Create I/O Completion Queue (05h): Supported 00:09:03.921 Identify (06h): Supported 00:09:03.921 Abort (08h): Supported 00:09:03.921 Set Features (09h): Supported 00:09:03.921 Get Features (0Ah): Supported 00:09:03.921 Asynchronous Event Request (0Ch): Supported 00:09:03.921 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:03.921 Directive 
Send (19h): Supported 00:09:03.921 Directive Receive (1Ah): Supported 00:09:03.921 Virtualization Management (1Ch): Supported 00:09:03.921 Doorbell Buffer Config (7Ch): Supported 00:09:03.921 Format NVM (80h): Supported LBA-Change 00:09:03.921 I/O Commands 00:09:03.921 ------------ 00:09:03.921 Flush (00h): Supported LBA-Change 00:09:03.921 Write (01h): Supported LBA-Change 00:09:03.921 Read (02h): Supported 00:09:03.921 Compare (05h): Supported 00:09:03.921 Write Zeroes (08h): Supported LBA-Change 00:09:03.921 Dataset Management (09h): Supported LBA-Change 00:09:03.921 Unknown (0Ch): Supported 00:09:03.921 Unknown (12h): Supported 00:09:03.921 Copy (19h): Supported LBA-Change 00:09:03.921 Unknown (1Dh): Supported LBA-Change 00:09:03.921 00:09:03.921 Error Log 00:09:03.921 ========= 00:09:03.921 00:09:03.921 Arbitration 00:09:03.921 =========== 00:09:03.921 Arbitration Burst: no limit 00:09:03.921 00:09:03.921 Power Management 00:09:03.921 ================ 00:09:03.921 Number of Power States: 1 00:09:03.921 Current Power State: Power State #0 00:09:03.921 Power State #0: 00:09:03.921 Max Power: 25.00 W 00:09:03.921 Non-Operational State: Operational 00:09:03.921 Entry Latency: 16 microseconds 00:09:03.921 Exit Latency: 4 microseconds 00:09:03.921 Relative Read Throughput: 0 00:09:03.921 Relative Read Latency: 0 00:09:03.921 Relative Write Throughput: 0 00:09:03.921 Relative Write Latency: 0 00:09:03.921 Idle Power[2024-07-23 00:13:18.565093] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 80062 terminated unexpected 00:09:03.921 : Not Reported 00:09:03.921 Active Power: Not Reported 00:09:03.921 Non-Operational Permissive Mode: Not Supported 00:09:03.921 00:09:03.921 Health Information 00:09:03.921 ================== 00:09:03.921 Critical Warnings: 00:09:03.921 Available Spare Space: OK 00:09:03.921 Temperature: OK 00:09:03.921 Device Reliability: OK 00:09:03.921 Read Only: No 00:09:03.921 Volatile Memory Backup: OK 00:09:03.921 Current Temperature: 323 Kelvin (50 Celsius) 00:09:03.921 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:03.921 Available Spare: 0% 00:09:03.921 Available Spare Threshold: 0% 00:09:03.921 Life Percentage Used: 0% 00:09:03.921 Data Units Read: 1199 00:09:03.921 Data Units Written: 1032 00:09:03.921 Host Read Commands: 55646 00:09:03.921 Host Write Commands: 54149 00:09:03.921 Controller Busy Time: 0 minutes 00:09:03.921 Power Cycles: 0 00:09:03.921 Power On Hours: 0 hours 00:09:03.921 Unsafe Shutdowns: 0 00:09:03.921 Unrecoverable Media Errors: 0 00:09:03.921 Lifetime Error Log Entries: 0 00:09:03.921 Warning Temperature Time: 0 minutes 00:09:03.921 Critical Temperature Time: 0 minutes 00:09:03.921 00:09:03.921 Number of Queues 00:09:03.921 ================ 00:09:03.921 Number of I/O Submission Queues: 64 00:09:03.921 Number of I/O Completion Queues: 64 00:09:03.921 00:09:03.921 ZNS Specific Controller Data 00:09:03.921 ============================ 00:09:03.921 Zone Append Size Limit: 0 00:09:03.921 00:09:03.921 00:09:03.921 Active Namespaces 00:09:03.921 ================= 00:09:03.921 Namespace ID:1 00:09:03.921 Error Recovery Timeout: Unlimited 00:09:03.921 Command Set Identifier: NVM (00h) 00:09:03.921 Deallocate: Supported 00:09:03.922 Deallocated/Unwritten Error: Supported 00:09:03.922 Deallocated Read Value: All 0x00 00:09:03.922 Deallocate in Write Zeroes: Not Supported 00:09:03.922 Deallocated Guard Field: 0xFFFF 00:09:03.922 Flush: Supported 00:09:03.922 Reservation: Not Supported 00:09:03.922 Metadata Transferred 
as: Separate Metadata Buffer 00:09:03.922 Namespace Sharing Capabilities: Private 00:09:03.922 Size (in LBAs): 1548666 (5GiB) 00:09:03.922 Capacity (in LBAs): 1548666 (5GiB) 00:09:03.922 Utilization (in LBAs): 1548666 (5GiB) 00:09:03.922 Thin Provisioning: Not Supported 00:09:03.922 Per-NS Atomic Units: No 00:09:03.922 Maximum Single Source Range Length: 128 00:09:03.922 Maximum Copy Length: 128 00:09:03.922 Maximum Source Range Count: 128 00:09:03.922 NGUID/EUI64 Never Reused: No 00:09:03.922 Namespace Write Protected: No 00:09:03.922 Number of LBA Formats: 8 00:09:03.922 Current LBA Format: LBA Format #07 00:09:03.922 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:03.922 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:03.922 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:03.922 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:03.922 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:03.922 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:03.922 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:03.922 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:03.922 00:09:03.922 ===================================================== 00:09:03.922 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:03.922 ===================================================== 00:09:03.922 Controller Capabilities/Features 00:09:03.922 ================================ 00:09:03.922 Vendor ID: 1b36 00:09:03.922 Subsystem Vendor ID: 1af4 00:09:03.922 Serial Number: 12341 00:09:03.922 Model Number: QEMU NVMe Ctrl 00:09:03.922 Firmware Version: 8.0.0 00:09:03.922 Recommended Arb Burst: 6 00:09:03.922 IEEE OUI Identifier: 00 54 52 00:09:03.922 Multi-path I/O 00:09:03.922 May have multiple subsystem ports: No 00:09:03.922 May have multiple controllers: No 00:09:03.922 Associated with SR-IOV VF: No 00:09:03.922 Max Data Transfer Size: 524288 00:09:03.922 Max Number of Namespaces: 256 00:09:03.922 Max Number of I/O Queues: 64 00:09:03.922 NVMe Specification Version (VS): 1.4 00:09:03.922 NVMe Specification Version (Identify): 1.4 00:09:03.922 Maximum Queue Entries: 2048 00:09:03.922 Contiguous Queues Required: Yes 00:09:03.922 Arbitration Mechanisms Supported 00:09:03.922 Weighted Round Robin: Not Supported 00:09:03.922 Vendor Specific: Not Supported 00:09:03.922 Reset Timeout: 7500 ms 00:09:03.922 Doorbell Stride: 4 bytes 00:09:03.922 NVM Subsystem Reset: Not Supported 00:09:03.922 Command Sets Supported 00:09:03.922 NVM Command Set: Supported 00:09:03.922 Boot Partition: Not Supported 00:09:03.922 Memory Page Size Minimum: 4096 bytes 00:09:03.922 Memory Page Size Maximum: 65536 bytes 00:09:03.922 Persistent Memory Region: Not Supported 00:09:03.922 Optional Asynchronous Events Supported 00:09:03.922 Namespace Attribute Notices: Supported 00:09:03.922 Firmware Activation Notices: Not Supported 00:09:03.922 ANA Change Notices: Not Supported 00:09:03.922 PLE Aggregate Log Change Notices: Not Supported 00:09:03.922 LBA Status Info Alert Notices: Not Supported 00:09:03.922 EGE Aggregate Log Change Notices: Not Supported 00:09:03.922 Normal NVM Subsystem Shutdown event: Not Supported 00:09:03.922 Zone Descriptor Change Notices: Not Supported 00:09:03.922 Discovery Log Change Notices: Not Supported 00:09:03.922 Controller Attributes 00:09:03.922 128-bit Host Identifier: Not Supported 00:09:03.922 Non-Operational Permissive Mode: Not Supported 00:09:03.922 NVM Sets: Not Supported 00:09:03.922 Read Recovery Levels: Not Supported 00:09:03.922 Endurance Groups: Not Supported 00:09:03.922 
Predictable Latency Mode: Not Supported 00:09:03.922 Traffic Based Keep ALive: Not Supported 00:09:03.922 Namespace Granularity: Not Supported 00:09:03.922 SQ Associations: Not Supported 00:09:03.922 UUID List: Not Supported 00:09:03.922 Multi-Domain Subsystem: Not Supported 00:09:03.922 Fixed Capacity Management: Not Supported 00:09:03.922 Variable Capacity Management: Not Supported 00:09:03.922 Delete Endurance Group: Not Supported 00:09:03.922 Delete NVM Set: Not Supported 00:09:03.922 Extended LBA Formats Supported: Supported 00:09:03.922 Flexible Data Placement Supported: Not Supported 00:09:03.922 00:09:03.922 Controller Memory Buffer Support 00:09:03.922 ================================ 00:09:03.922 Supported: No 00:09:03.922 00:09:03.922 Persistent Memory Region Support 00:09:03.922 ================================ 00:09:03.922 Supported: No 00:09:03.922 00:09:03.922 Admin Command Set Attributes 00:09:03.922 ============================ 00:09:03.922 Security Send/Receive: Not Supported 00:09:03.922 Format NVM: Supported 00:09:03.922 Firmware Activate/Download: Not Supported 00:09:03.922 Namespace Management: Supported 00:09:03.922 Device Self-Test: Not Supported 00:09:03.922 Directives: Supported 00:09:03.922 NVMe-MI: Not Supported 00:09:03.922 Virtualization Management: Not Supported 00:09:03.922 Doorbell Buffer Config: Supported 00:09:03.922 Get LBA Status Capability: Not Supported 00:09:03.922 Command & Feature Lockdown Capability: Not Supported 00:09:03.922 Abort Command Limit: 4 00:09:03.922 Async Event Request Limit: 4 00:09:03.922 Number of Firmware Slots: N/A 00:09:03.922 Firmware Slot 1 Read-Only: N/A 00:09:03.922 Firmware Activation Without Reset: N/A 00:09:03.922 Multiple Update Detection Support: N/A 00:09:03.922 Firmware Update Granularity: No Information Provided 00:09:03.922 Per-Namespace SMART Log: Yes 00:09:03.922 Asymmetric Namespace Access Log Page: Not Supported 00:09:03.922 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:03.922 Command Effects Log Page: Supported 00:09:03.922 Get Log Page Extended Data: Supported 00:09:03.922 Telemetry Log Pages: Not Supported 00:09:03.922 Persistent Event Log Pages: Not Supported 00:09:03.922 Supported Log Pages Log Page: May Support 00:09:03.922 Commands Supported & Effects Log Page: Not Supported 00:09:03.922 Feature Identifiers & Effects Log Page:May Support 00:09:03.922 NVMe-MI Commands & Effects Log Page: May Support 00:09:03.922 Data Area 4 for Telemetry Log: Not Supported 00:09:03.922 Error Log Page Entries Supported: 1 00:09:03.922 Keep Alive: Not Supported 00:09:03.922 00:09:03.922 NVM Command Set Attributes 00:09:03.922 ========================== 00:09:03.922 Submission Queue Entry Size 00:09:03.922 Max: 64 00:09:03.922 Min: 64 00:09:03.922 Completion Queue Entry Size 00:09:03.922 Max: 16 00:09:03.922 Min: 16 00:09:03.922 Number of Namespaces: 256 00:09:03.922 Compare Command: Supported 00:09:03.922 Write Uncorrectable Command: Not Supported 00:09:03.922 Dataset Management Command: Supported 00:09:03.922 Write Zeroes Command: Supported 00:09:03.922 Set Features Save Field: Supported 00:09:03.922 Reservations: Not Supported 00:09:03.922 Timestamp: Supported 00:09:03.922 Copy: Supported 00:09:03.922 Volatile Write Cache: Present 00:09:03.922 Atomic Write Unit (Normal): 1 00:09:03.922 Atomic Write Unit (PFail): 1 00:09:03.922 Atomic Compare & Write Unit: 1 00:09:03.922 Fused Compare & Write: Not Supported 00:09:03.922 Scatter-Gather List 00:09:03.922 SGL Command Set: Supported 00:09:03.922 SGL Keyed: Not Supported 
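Illustrative aside: the 64-byte/16-byte submission- and completion-queue entry sizes above (Max/Min 64 and 16) are carried in Identify Controller as packed log2 nibbles, the SQES/CQES fields of the NVMe base spec. A small C sketch with assumed raw byte values:

    #include <stdio.h>

    int main(void)
    {
        unsigned char sqes = 0x66; /* assumed raw value: min = max = 2^6 = 64 B */
        unsigned char cqes = 0x44; /* assumed raw value: min = max = 2^4 = 16 B */
        printf("SQE min %u max %u\n", 1u << (sqes & 0xf), 1u << (sqes >> 4));
        printf("CQE min %u max %u\n", 1u << (cqes & 0xf), 1u << (cqes >> 4));
        return 0;
    }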
00:09:03.922 SGL Bit Bucket Descriptor: Not Supported 00:09:03.922 SGL Metadata Pointer: Not Supported 00:09:03.922 Oversized SGL: Not Supported 00:09:03.922 SGL Metadata Address: Not Supported 00:09:03.922 SGL Offset: Not Supported 00:09:03.922 Transport SGL Data Block: Not Supported 00:09:03.922 Replay Protected Memory Block: Not Supported 00:09:03.922 00:09:03.922 Firmware Slot Information 00:09:03.922 ========================= 00:09:03.922 Active slot: 1 00:09:03.922 Slot 1 Firmware Revision: 1.0 00:09:03.922 00:09:03.922 00:09:03.922 Commands Supported and Effects 00:09:03.922 ============================== 00:09:03.922 Admin Commands 00:09:03.922 -------------- 00:09:03.922 Delete I/O Submission Queue (00h): Supported 00:09:03.922 Create I/O Submission Queue (01h): Supported 00:09:03.922 Get Log Page (02h): Supported 00:09:03.922 Delete I/O Completion Queue (04h): Supported 00:09:03.922 Create I/O Completion Queue (05h): Supported 00:09:03.922 Identify (06h): Supported 00:09:03.922 Abort (08h): Supported 00:09:03.922 Set Features (09h): Supported 00:09:03.922 Get Features (0Ah): Supported 00:09:03.922 Asynchronous Event Request (0Ch): Supported 00:09:03.923 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:03.923 Directive Send (19h): Supported 00:09:03.923 Directive Receive (1Ah): Supported 00:09:03.923 Virtualization Management (1Ch): Supported 00:09:03.923 Doorbell Buffer Config (7Ch): Supported 00:09:03.923 Format NVM (80h): Supported LBA-Change 00:09:03.923 I/O Commands 00:09:03.923 ------------ 00:09:03.923 Flush (00h): Supported LBA-Change 00:09:03.923 Write (01h): Supported LBA-Change 00:09:03.923 Read (02h): Supported 00:09:03.923 Compare (05h): Supported 00:09:03.923 Write Zeroes (08h): Supported LBA-Change 00:09:03.923 Dataset Management (09h): Supported LBA-Change 00:09:03.923 Unknown (0Ch): Supported 00:09:03.923 Unknown (12h): Supported 00:09:03.923 Copy (19h): Supported LBA-Change 00:09:03.923 Unknown (1Dh): Supported LBA-Change 00:09:03.923 00:09:03.923 Error Log 00:09:03.923 ========= 00:09:03.923 00:09:03.923 Arbitration 00:09:03.923 =========== 00:09:03.923 Arbitration Burst: no limit 00:09:03.923 00:09:03.923 Power Management 00:09:03.923 ================ 00:09:03.923 Number of Power States: 1 00:09:03.923 Current Power State: Power State #0 00:09:03.923 Power State #0: 00:09:03.923 Max Power: 25.00 W 00:09:03.923 Non-Operational State: Operational 00:09:03.923 Entry Latency: 16 microseconds 00:09:03.923 Exit Latency: 4 microseconds 00:09:03.923 Relative Read Throughput: 0 00:09:03.923 Relative Read Latency: 0 00:09:03.923 Relative Write Throughput: 0 00:09:03.923 Relative Write Latency: 0 00:09:03.923 Idle Power: Not Reported 00:09:03.923 Active Power: Not Reported 00:09:03.923 Non-Operational Permissive Mode: Not Supported 00:09:03.923 00:09:03.923 Health Information 00:09:03.923 ================== 00:09:03.923 Critical Warnings: 00:09:03.923 Available Spare Space: OK 00:09:03.923 Temperature: OK 00:09:03.923 Device Reliability: OK 00:09:03.923 Read Only: No 00:09:03.923 Volatile Memory Backup: OK 00:09:03.923 Current Temperature: 323 Kelvin (50 Celsius) 00:09:03.923 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:03.923 Available Spare: 0% 00:09:03.923 Available Spare Threshold: 0% 00:09:03.923 Life Percentage Used: 0% 00:09:03.923 Data Units Read: 893 00:09:03.923 Data Units Written: 746 00:09:03.923 Host Read Commands: 39738 00:09:03.923 Host Write Commands: 37556 00:09:03.923 Controller Busy Time: 0 minutes 00:09:03.923 Power Cycles: 0 
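Illustrative aside: the SMART temperatures above are reported in Kelvin; the parenthesized Celsius values follow from the integer offset of 273 the tool appears to apply, as a quick C check shows:

    #include <stdio.h>

    int main(void)
    {
        unsigned current = 323, threshold = 343; /* values from the dump above */
        printf("%u K (%u C), threshold %u K (%u C)\n",
               current, current - 273, threshold, threshold - 273);
        return 0;
    }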
00:09:03.923 Power On Hours: 0 hours 00:09:03.923 Unsafe Shutdowns: 0 00:09:03.923 Unrecoverable Media Errors: 0 00:09:03.923 Lifetime Error Log Entries: 0 00:09:03.923 Warning Temperature Time: 0 minutes 00:09:03.923 Critical Temperature Time: 0 minutes 00:09:03.923 00:09:03.923 Number of Queues 00:09:03.923 ================ 00:09:03.923 Number of I/O Submission Queues: 64 00:09:03.923 Number of I/O Completion Queues: 64 00:09:03.923 00:09:03.923 ZNS Specific Controller Data 00:09:03.923 ============================ 00:09:03.923 Zone Append Size Limit: 0 00:09:03.923 00:09:03.923 00:09:03.923 Active Namespaces 00:09:03.923 ================= 00:09:03.923 Namespace ID:1 00:09:03.923 Error Recovery Timeout: Unlimited 00:09:03.923 Command Set Identifier: NVM (00h) 00:09:03.923 Deallocate: Supported 00:09:03.923 Deallocated/Unwritten Error: Supported 00:09:03.923 Deallocated Read Value: All 0x00 00:09:03.923 Deallocate in Write Zeroes: Not Supported 00:09:03.923 Deallocated Guard Field: 0xFFFF 00:09:03.923 Flush: Supported 00:09:03.923 Reservation: Not Supported 00:09:03.923 Namespace Sharing Capabilities: Private 00:09:03.923 Size (in LBAs): 1310720 (5GiB) 00:09:03.923 Capacity (in LBAs): 1310720 (5GiB) 00:09:03.923 Utilization (in LBAs): 1310720 (5GiB) 00:09:03.923 Thin Provisioning: Not Supported 00:09:03.923 Per-NS Atomic Units: No 00:09:03.923 Maximum Single Source Range Length: 128 00:09:03.923 Maximum Copy Length: 128 00:09:03.923 Maximum Source Range Count: 128 00:09:03.923 NGUID/EUI64 Never Reused: No 00:09:03.923 Namespace Write Protected: No 00:09:03.923 Number of LBA Formats: 8 00:09:03.923 Current LBA Format: LBA Format #04 00:09:03.923 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:03.923 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:03.923 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:03.923 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:03.923 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:03.923 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:03.923 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:03.923 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:03.923 00:09:03.923 ===================================================== 00:09:03.923 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:03.923 ===================================================== 00:09:03.923 Controller Capabilities/Features 00:09:03.923 ================================ 00:09:03.923 Vendor ID: 1b36 00:09:03.923 Subsystem Vendor ID: 1af4 00:09:03.923 Serial Number: 12343 00:09:03.923 Model Number: QEMU NVMe Ctrl 00:09:03.923 Firmware Version: 8.0.0 00:09:03.923 Recommended Arb Burst: 6 00:09:03.923 IEEE OUI Identifier: 00 54 52 00:09:03.923 Multi-path I/O 00:09:03.923 May have multiple subsystem ports: No 00:09:03.923 May have multiple controllers: Yes 00:09:03.923 Associated with SR-IOV VF: No 00:09:03.923 Max Data Transfer Size: 524288 00:09:03.923 Max Number of Namespaces: 256 00:09:03.923 Max Number of I/O Queues: 64 00:09:03.923 NVMe Specification Version (VS): 1.4 00:09:03.923 NVMe Specification Version (Identify): 1.4 00:09:03.923 Maximum Queue Entries: 2048 00:09:03.923 Contiguous Queues Required: Yes 00:09:03.923 Arbitration Mechanisms Supported 00:09:03.923 Weighted Round Robin: Not Supported 00:09:03.923 Vendor Specific: Not Supported 00:09:03.923 Reset Timeout: 7500 ms 00:09:03.923 Doorbell Stride: 4 bytes 00:09:03.923 NVM Subsystem Reset: Not Supported 00:09:03.923 Command Sets Supported 00:09:03.923 NVM Command Set: Supported 
00:09:03.923 Boot Partition: Not Supported 00:09:03.923 Memory Page Size Minimum: 4096 bytes 00:09:03.923 Memory Page Size Maximum: 65536 bytes 00:09:03.923 Persistent Memory Region: Not Supported 00:09:03.923 Optional Asynchronous Events Supported 00:09:03.923 Namespace Attribute Notices: Supported 00:09:03.923 Firmware Activation Notices: Not Supported 00:09:03.923 ANA Change Notices: Not Supported 00:09:03.923 PLE Aggregate Log Change Notices: Not Supported 00:09:03.923 LBA Status Info Alert Notices: Not Supported 00:09:03.923 EGE Aggregate Log Change Notices: Not Supported 00:09:03.923 Normal NVM Subsystem Shutdown event: Not Supported 00:09:03.923 Zone Descriptor Change Notices: Not Supported 00:09:03.923 Discovery Log Change Notices: Not Supported 00:09:03.923 Controller Attributes 00:09:03.923 128-bit Host Identifier: Not Supported 00:09:03.923 Non-Operational Permissive Mode: Not Supported 00:09:03.923 NVM Sets: Not Supported 00:09:03.923 Read Recovery Levels: Not Supported 00:09:03.923 Endurance Groups: Supported 00:09:03.923 Predictable Latency Mode: Not Supported 00:09:03.923 Traffic Based Keep ALive: Not Supported 00:09:03.923 Namespace Granularity: Not Supported 00:09:03.923 SQ Associations: Not Supported 00:09:03.923 UUID List: Not Supported 00:09:03.923 Multi-Domain Subsystem: Not Supported 00:09:03.923 Fixed Capacity Management: Not Supported 00:09:03.923 Variable Capacity Management: Not Supported 00:09:03.923 Delete Endurance Group: Not Supported 00:09:03.923 Delete NVM Set: Not Supported 00:09:03.923 Extended LBA Formats Supported: Supported 00:09:03.923 Flexible Data Placement Supported: Supported 00:09:03.923 00:09:03.923 Controller Memory Buffer Support 00:09:03.923 ================================ 00:09:03.923 Supported: No 00:09:03.923 00:09:03.923 Persistent Memory Region Support 00:09:03.923 ================================ 00:09:03.923 Supported: No 00:09:03.923 00:09:03.923 Admin Command Set Attributes 00:09:03.923 ============================ 00:09:03.923 Security Send/Receive: Not Supported 00:09:03.923 Format NVM: Supported 00:09:03.923 Firmware Activate/Download: Not Supported 00:09:03.923 Namespace Management: Supported 00:09:03.923 Device Self-Test: Not Supported 00:09:03.923 Directives: Supported 00:09:03.923 NVMe-MI: Not Supported 00:09:03.923 Virtualization Management: Not Supported 00:09:03.924 Doorbell Buffer Config: Supported 00:09:03.924 Get LBA Status Capability: Not Supported 00:09:03.924 Command & Feature Lockdown Capability: Not Supported 00:09:03.924 Abort Command Limit: 4 00:09:03.924 Async Event Request Limit: 4 00:09:03.924 Number of Firmware Slots: N/A 00:09:03.924 Firmware Slot 1 Read-Only: N/A 00:09:03.924 Firmware Activation Without Reset: N/A 00:09:03.924 Multiple Update Detection Support: N/A 00:09:03.924 Firmware Update Granularity: No Information Provided 00:09:03.924 Per-Namespace SMART Log: Yes 00:09:03.924 Asymmetric Namespace Access Log Page: Not Supported 00:09:03.924 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:03.924 Command Effects Log Page: Supported 00:09:03.924 Get Log Page Extended Data: Supported 00:09:03.924 Telemetry Log Pages: Not Supported 00:09:03.924 Persistent Event Log Pages: Not Supported 00:09:03.924 Supported Log Pages Log Page: May Support 00:09:03.924 Commands Supported & Effects Log Page: Not Supported 00:09:03.924 Feature Identifiers & Effects Log Page:May Support 00:09:03.924 NVMe-MI Commands & Effects Log Page: May Support 00:09:03.924 Data Area 4 for Telemetry Log: Not Supported 00:09:03.924 
Error Log Page Entries Supported: 1 00:09:03.924 Keep Alive: Not Supported 00:09:03.924 00:09:03.924 NVM Command Set Attributes 00:09:03.924 ========================== 00:09:03.924 Submission Queue Entry Size 00:09:03.924 Max: 64 00:09:03.924 Min: 64 00:09:03.924 Completion Queue Entry Size 00:09:03.924 Max: 16 00:09:03.924 Min: 16 00:09:03.924 Number of Namespaces: 256 00:09:03.924 Compare Command: Supported 00:09:03.924 Write Uncorrectable Command: Not Supported 00:09:03.924 Dataset Management Command: Supported 00:09:03.924 Write Zeroes Command: Supported 00:09:03.924 Set Features Save Field: Supported 00:09:03.924 Reservations: Not Supported 00:09:03.924 Timestamp: Supported 00:09:03.924 Copy: Supported 00:09:03.924 Volatile Write Cache: Present 00:09:03.924 Atomic Write Unit (Normal): 1 00:09:03.924 Atomic Write Unit (PFail): 1 00:09:03.924 Atomic Compare & Write Unit: 1 00:09:03.924 Fused Compare & Write: Not Supported 00:09:03.924 Scatter-Gather List 00:09:03.924 SGL Command Set: Supported 00:09:03.924 SGL Keyed: Not Supported 00:09:03.924 SGL Bit Bucket Descriptor: Not Supported 00:09:03.924 SGL Metadata Pointer: Not Supported 00:09:03.924 Oversized SGL: Not Supported 00:09:03.924 SGL Metadata Address: Not Supported 00:09:03.924 SGL Offset: Not Supported 00:09:03.924 Transport SGL Data Block: Not Supported 00:09:03.924 Replay Protected Memory Block: Not Supported 00:09:03.924 00:09:03.924 Firmware Slot Information 00:09:03.924 ========================= 00:09:03.924 Active slot: 1 00:09:03.924 Slot 1 Firmware Revision: 1.0 00:09:03.924 00:09:03.924 00:09:03.924 Commands Supported and Effects 00:09:03.924 ============================== 00:09:03.924 Admin Commands 00:09:03.924 -------------- 00:09:03.924 Delete I/O Submission Queue (00h): Supported 00:09:03.924 Create I/O Submission Queue (01h): Supported 00:09:03.924 Get Log Page (02h): Supported 00:09:03.924 Delete I/O Completion Queue (04h): Supported 00:09:03.924 Create I/O Completion Queue (05h): Supported 00:09:03.924 Identify (06h): Supported 00:09:03.924 Abort (08h): Supported 00:09:03.924 Set Features (09h): Supported 00:09:03.924 Get Features (0Ah): Supported 00:09:03.924 Asynchronous Event Request (0Ch): Supported 00:09:03.924 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:03.924 Directive Send (19h): Supported 00:09:03.924 Directive Receive (1Ah): Supported 00:09:03.924 Virtualization Management (1Ch): Supported 00:09:03.924 Doorbell Buffer Config (7Ch): Supported 00:09:03.924 Format NVM (80h): Supported LBA-Change 00:09:03.924 I/O Commands 00:09:03.924 ------------ 00:09:03.924 Flush (00h): Supported LBA-Change 00:09:03.924 Write (01h): Supported LBA-Change 00:09:03.924 Read (02h): Supported 00:09:03.924 Compare (05h): Supported 00:09:03.924 Write Zeroes (08h): Supported LBA-Change 00:09:03.924 Dataset Management (09h): Supported LBA-Change 00:09:03.924 Unknown (0Ch): Supported 00:09:03.924 Unknown (12h): Supported 00:09:03.924 Copy (19h): Supported LBA-Change 00:09:03.924 Unknown (1Dh): Supported LBA-Change 00:09:03.924 00:09:03.924 Error Log 00:09:03.924 ========= 00:09:03.924 00:09:03.924 Arbitration 00:09:03.924 =========== 00:09:03.924 Arbitration Burst: no limit 00:09:03.924 00:09:03.924 Power Management 00:09:03.924 ================ 00:09:03.924 Number of Power States: 1 00:09:03.924 Current Power State: Power State #0 00:09:03.924 Power State #0: 00:09:03.924 Max Power: 25.00 W 00:09:03.924 Non-Operational State: Operational 00:09:03.924 Entry Latency: 16 microseconds 00:09:03.924 Exit Latency: 4 
microseconds 00:09:03.924 Relative Read Throughput: 0 00:09:03.924 Relative Read Latency: 0 00:09:03.924 Relative Write Throughput: 0 00:09:03.924 Relative Write Latency: 0 00:09:03.924 Idle Power: Not Reported 00:09:03.924 Active Power: Not Reported 00:09:03.924 Non-Operational Permissive Mode: Not Supported 00:09:03.924 00:09:03.924 Health Information 00:09:03.924 ================== 00:09:03.924 Critical Warnings: 00:09:03.924 Available Spare Space: OK 00:09:03.924 Temperature: OK 00:09:03.924 Device Reliability: OK 00:09:03.924 Read Only: No 00:09:03.924 Volatile Memory Backup: OK 00:09:03.924 Current Temperature: 323 Kelvin (50 Celsius) 00:09:03.924 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:03.924 Available Spare: 0% 00:09:03.924 Available Spare Threshold: 0% 00:09:03.924 Life Percentage Used: 0% 00:09:03.924 Data Units Read: 931 00:09:03.924 Data Units Written: 825 00:09:03.924 Host Read Commands: 39588 00:09:03.924 Host Write Commands: 38178 00:09:03.924 Controller Busy Time: 0 minutes 00:09:03.924 Power Cycles: 0 00:09:03.924 Power On Hours: 0 hours 00:09:03.924 Unsafe Shutdowns: 0 00:09:03.924 Unrecoverable Media Errors: 0 00:09:03.924 Lifetime Error Log Entries: 0 00:09:03.924 Warning Temperature Time: 0 minutes 00:09:03.924 Critical Temperature Time: 0 minutes 00:09:03.924 00:09:03.924 Number of Queues 00:09:03.924 ================ 00:09:03.924 Number of I/O Submission Queues: 64 00:09:03.924 Number of I/O Completion Queues: 64 00:09:03.924 00:09:03.924 ZNS Specific Controller Data 00:09:03.924 ============================ 00:09:03.924 Zone Append Size Limit: 0 00:09:03.924 00:09:03.924 00:09:03.924 Active Namespaces 00:09:03.924 ================= 00:09:03.924 Namespace ID:1 00:09:03.924 Error Recovery Timeout: Unlimited 00:09:03.924 Command Set Identifier: NVM (00h) 00:09:03.924 Deallocate: Supported 00:09:03.924 Deallocated/Unwritten Error: Supported 00:09:03.924 Deallocated Read Value: All 0x00 00:09:03.924 Deallocate in Write Zeroes: Not Supported 00:09:03.924 Deallocated Guard Field: 0xFFFF 00:09:03.924 Flush: Supported 00:09:03.924 Reservation: Not Supported 00:09:03.924 Namespace Sharing Capabilities: Multiple Controllers 00:09:03.924 Size (in LBAs): 262144 (1GiB) 00:09:03.924 Capacity (in LBAs): 262144 (1GiB) 00:09:03.924 Utilization (in LBAs): 262144 (1GiB) 00:09:03.924 Thin Provisioning: Not Supported 00:09:03.924 Per-NS Atomic Units: No 00:09:03.924 Maximum Single Source Range Length: 128 00:09:03.924 Maximum Copy Length: 128 00:09:03.924 Maximum Source Range Count: 128 00:09:03.924 NGUID/EUI64 Never Reused: No 00:09:03.924 Namespace Write Protected: No 00:09:03.924 Endurance group ID: 1 00:09:03.924 Number of LBA Formats: 8 00:09:03.924 Current LBA Format: LBA Format #04 00:09:03.924 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:03.924 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:03.924 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:03.924 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:03.924 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:03.924 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:03.924 [2024-07-23 00:13:18.566293] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 80062 terminated unexpected 00:09:03.924 [2024-07-23 00:13:18.567865] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 80062 terminated unexpected 00:09:03.924 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:03.924 LBA Format #07: Data Size: 4096 Metadata Size: 64 
00:09:03.924 00:09:03.924 Get Feature FDP: 00:09:03.924 ================ 00:09:03.924 Enabled: Yes 00:09:03.924 FDP configuration index: 0 00:09:03.924 00:09:03.924 FDP configurations log page 00:09:03.924 =========================== 00:09:03.925 Number of FDP configurations: 1 00:09:03.925 Version: 0 00:09:03.925 Size: 112 00:09:03.925 FDP Configuration Descriptor: 0 00:09:03.925 Descriptor Size: 96 00:09:03.925 Reclaim Group Identifier format: 2 00:09:03.925 FDP Volatile Write Cache: Not Present 00:09:03.925 FDP Configuration: Valid 00:09:03.925 Vendor Specific Size: 0 00:09:03.925 Number of Reclaim Groups: 2 00:09:03.925 Number of Reclaim Unit Handles: 8 00:09:03.925 Max Placement Identifiers: 128 00:09:03.925 Number of Namespaces Supported: 256 00:09:03.925 Reclaim Unit Nominal Size: 6000000 bytes 00:09:03.925 Estimated Reclaim Unit Time Limit: Not Reported 00:09:03.925 RUH Desc #000: RUH Type: Initially Isolated 00:09:03.925 RUH Desc #001: RUH Type: Initially Isolated 00:09:03.925 RUH Desc #002: RUH Type: Initially Isolated 00:09:03.925 RUH Desc #003: RUH Type: Initially Isolated 00:09:03.925 RUH Desc #004: RUH Type: Initially Isolated 00:09:03.925 RUH Desc #005: RUH Type: Initially Isolated 00:09:03.925 RUH Desc #006: RUH Type: Initially Isolated 00:09:03.925 RUH Desc #007: RUH Type: Initially Isolated 00:09:03.925 00:09:03.925 FDP reclaim unit handle usage log page 00:09:03.925 ====================================== 00:09:03.925 Number of Reclaim Unit Handles: 8 00:09:03.925 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:03.925 RUH Usage Desc #001: RUH Attributes: Unused 00:09:03.925 RUH Usage Desc #002: RUH Attributes: Unused 00:09:03.925 RUH Usage Desc #003: RUH Attributes: Unused 00:09:03.925 RUH Usage Desc #004: RUH Attributes: Unused 00:09:03.925 RUH Usage Desc #005: RUH Attributes: Unused 00:09:03.925 RUH Usage Desc #006: RUH Attributes: Unused 00:09:03.925 RUH Usage Desc #007: RUH Attributes: Unused 00:09:03.925 00:09:03.925 FDP statistics log page 00:09:03.925 ======================= 00:09:03.925 Host bytes with metadata written: 532127744 00:09:03.925 Media bytes with metadata written: 532185088 00:09:03.925 Media bytes erased: 0 00:09:03.925 00:09:03.925 FDP events log page 00:09:03.925 =================== 00:09:03.925 Number of FDP events: 0 00:09:03.925 00:09:03.925 ===================================================== 00:09:03.925 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:03.925 ===================================================== 00:09:03.925 Controller Capabilities/Features 00:09:03.925 ================================ 00:09:03.925 Vendor ID: 1b36 00:09:03.925 Subsystem Vendor ID: 1af4 00:09:03.925 Serial Number: 12342 00:09:03.925 Model Number: QEMU NVMe Ctrl 00:09:03.925 Firmware Version: 8.0.0 00:09:03.925 Recommended Arb Burst: 6 00:09:03.925 IEEE OUI Identifier: 00 54 52 00:09:03.925 Multi-path I/O 00:09:03.925 May have multiple subsystem ports: No 00:09:03.925 May have multiple controllers: No 00:09:03.925 Associated with SR-IOV VF: No 00:09:03.925 Max Data Transfer Size: 524288 00:09:03.925 Max Number of Namespaces: 256 00:09:03.925 Max Number of I/O Queues: 64 00:09:03.925 NVMe Specification Version (VS): 1.4 00:09:03.925 NVMe Specification Version (Identify): 1.4 00:09:03.925 Maximum Queue Entries: 2048 00:09:03.925 Contiguous Queues Required: Yes 00:09:03.925 Arbitration Mechanisms Supported 00:09:03.925 Weighted Round Robin: Not Supported 00:09:03.925 Vendor Specific: Not Supported 00:09:03.925 Reset Timeout: 7500 ms 00:09:03.925 
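Illustrative aside: the FDP statistics log a few entries above permits a crude write-amplification estimate for the 0000:00:13.0 subsystem, media bytes written divided by host bytes written:

    #include <stdio.h>

    int main(void)
    {
        double host = 532127744.0, media = 532185088.0; /* from the dump above */
        printf("WAF ~= %.6f\n", media / host);          /* ~1.000108 */
        return 0;
    }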
Doorbell Stride: 4 bytes 00:09:03.925 NVM Subsystem Reset: Not Supported 00:09:03.925 Command Sets Supported 00:09:03.925 NVM Command Set: Supported 00:09:03.925 Boot Partition: Not Supported 00:09:03.925 Memory Page Size Minimum: 4096 bytes 00:09:03.925 Memory Page Size Maximum: 65536 bytes 00:09:03.925 Persistent Memory Region: Not Supported 00:09:03.925 Optional Asynchronous Events Supported 00:09:03.925 Namespace Attribute Notices: Supported 00:09:03.925 Firmware Activation Notices: Not Supported 00:09:03.925 ANA Change Notices: Not Supported 00:09:03.925 PLE Aggregate Log Change Notices: Not Supported 00:09:03.925 LBA Status Info Alert Notices: Not Supported 00:09:03.925 EGE Aggregate Log Change Notices: Not Supported 00:09:03.925 Normal NVM Subsystem Shutdown event: Not Supported 00:09:03.925 Zone Descriptor Change Notices: Not Supported 00:09:03.925 Discovery Log Change Notices: Not Supported 00:09:03.925 Controller Attributes 00:09:03.925 128-bit Host Identifier: Not Supported 00:09:03.925 Non-Operational Permissive Mode: Not Supported 00:09:03.925 NVM Sets: Not Supported 00:09:03.925 Read Recovery Levels: Not Supported 00:09:03.925 Endurance Groups: Not Supported 00:09:03.925 Predictable Latency Mode: Not Supported 00:09:03.925 Traffic Based Keep ALive: Not Supported 00:09:03.925 Namespace Granularity: Not Supported 00:09:03.925 SQ Associations: Not Supported 00:09:03.925 UUID List: Not Supported 00:09:03.925 Multi-Domain Subsystem: Not Supported 00:09:03.925 Fixed Capacity Management: Not Supported 00:09:03.925 Variable Capacity Management: Not Supported 00:09:03.925 Delete Endurance Group: Not Supported 00:09:03.925 Delete NVM Set: Not Supported 00:09:03.925 Extended LBA Formats Supported: Supported 00:09:03.925 Flexible Data Placement Supported: Not Supported 00:09:03.925 00:09:03.925 Controller Memory Buffer Support 00:09:03.925 ================================ 00:09:03.925 Supported: No 00:09:03.925 00:09:03.925 Persistent Memory Region Support 00:09:03.925 ================================ 00:09:03.925 Supported: No 00:09:03.925 00:09:03.925 Admin Command Set Attributes 00:09:03.925 ============================ 00:09:03.925 Security Send/Receive: Not Supported 00:09:03.925 Format NVM: Supported 00:09:03.925 Firmware Activate/Download: Not Supported 00:09:03.925 Namespace Management: Supported 00:09:03.925 Device Self-Test: Not Supported 00:09:03.925 Directives: Supported 00:09:03.925 NVMe-MI: Not Supported 00:09:03.925 Virtualization Management: Not Supported 00:09:03.925 Doorbell Buffer Config: Supported 00:09:03.925 Get LBA Status Capability: Not Supported 00:09:03.925 Command & Feature Lockdown Capability: Not Supported 00:09:03.925 Abort Command Limit: 4 00:09:03.925 Async Event Request Limit: 4 00:09:03.925 Number of Firmware Slots: N/A 00:09:03.925 Firmware Slot 1 Read-Only: N/A 00:09:03.925 Firmware Activation Without Reset: N/A 00:09:03.925 Multiple Update Detection Support: N/A 00:09:03.925 Firmware Update Granularity: No Information Provided 00:09:03.925 Per-Namespace SMART Log: Yes 00:09:03.925 Asymmetric Namespace Access Log Page: Not Supported 00:09:03.925 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:03.925 Command Effects Log Page: Supported 00:09:03.925 Get Log Page Extended Data: Supported 00:09:03.925 Telemetry Log Pages: Not Supported 00:09:03.925 Persistent Event Log Pages: Not Supported 00:09:03.925 Supported Log Pages Log Page: May Support 00:09:03.926 Commands Supported & Effects Log Page: Not Supported 00:09:03.926 Feature Identifiers & Effects Log 
Page:May Support 00:09:03.926 NVMe-MI Commands & Effects Log Page: May Support 00:09:03.926 Data Area 4 for Telemetry Log: Not Supported 00:09:03.926 Error Log Page Entries Supported: 1 00:09:03.926 Keep Alive: Not Supported 00:09:03.926 00:09:03.926 NVM Command Set Attributes 00:09:03.926 ========================== 00:09:03.926 Submission Queue Entry Size 00:09:03.926 Max: 64 00:09:03.926 Min: 64 00:09:03.926 Completion Queue Entry Size 00:09:03.926 Max: 16 00:09:03.926 Min: 16 00:09:03.926 Number of Namespaces: 256 00:09:03.926 Compare Command: Supported 00:09:03.926 Write Uncorrectable Command: Not Supported 00:09:03.926 Dataset Management Command: Supported 00:09:03.926 Write Zeroes Command: Supported 00:09:03.926 Set Features Save Field: Supported 00:09:03.926 Reservations: Not Supported 00:09:03.926 Timestamp: Supported 00:09:03.926 Copy: Supported 00:09:03.926 Volatile Write Cache: Present 00:09:03.926 Atomic Write Unit (Normal): 1 00:09:03.926 Atomic Write Unit (PFail): 1 00:09:03.926 Atomic Compare & Write Unit: 1 00:09:03.926 Fused Compare & Write: Not Supported 00:09:03.926 Scatter-Gather List 00:09:03.926 SGL Command Set: Supported 00:09:03.926 SGL Keyed: Not Supported 00:09:03.926 SGL Bit Bucket Descriptor: Not Supported 00:09:03.926 SGL Metadata Pointer: Not Supported 00:09:03.926 Oversized SGL: Not Supported 00:09:03.926 SGL Metadata Address: Not Supported 00:09:03.926 SGL Offset: Not Supported 00:09:03.926 Transport SGL Data Block: Not Supported 00:09:03.926 Replay Protected Memory Block: Not Supported 00:09:03.926 00:09:03.926 Firmware Slot Information 00:09:03.926 ========================= 00:09:03.926 Active slot: 1 00:09:03.926 Slot 1 Firmware Revision: 1.0 00:09:03.926 00:09:03.926 00:09:03.926 Commands Supported and Effects 00:09:03.926 ============================== 00:09:03.926 Admin Commands 00:09:03.926 -------------- 00:09:03.926 Delete I/O Submission Queue (00h): Supported 00:09:03.926 Create I/O Submission Queue (01h): Supported 00:09:03.926 Get Log Page (02h): Supported 00:09:03.926 Delete I/O Completion Queue (04h): Supported 00:09:03.926 Create I/O Completion Queue (05h): Supported 00:09:03.926 Identify (06h): Supported 00:09:03.926 Abort (08h): Supported 00:09:03.926 Set Features (09h): Supported 00:09:03.926 Get Features (0Ah): Supported 00:09:03.926 Asynchronous Event Request (0Ch): Supported 00:09:03.926 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:03.926 Directive Send (19h): Supported 00:09:03.926 Directive Receive (1Ah): Supported 00:09:03.926 Virtualization Management (1Ch): Supported 00:09:03.926 Doorbell Buffer Config (7Ch): Supported 00:09:03.926 Format NVM (80h): Supported LBA-Change 00:09:03.926 I/O Commands 00:09:03.926 ------------ 00:09:03.926 Flush (00h): Supported LBA-Change 00:09:03.926 Write (01h): Supported LBA-Change 00:09:03.926 Read (02h): Supported 00:09:03.926 Compare (05h): Supported 00:09:03.926 Write Zeroes (08h): Supported LBA-Change 00:09:03.926 Dataset Management (09h): Supported LBA-Change 00:09:03.926 Unknown (0Ch): Supported 00:09:03.926 Unknown (12h): Supported 00:09:03.926 Copy (19h): Supported LBA-Change 00:09:03.926 Unknown (1Dh): Supported LBA-Change 00:09:03.926 00:09:03.926 Error Log 00:09:03.926 ========= 00:09:03.926 00:09:03.926 Arbitration 00:09:03.926 =========== 00:09:03.926 Arbitration Burst: no limit 00:09:03.926 00:09:03.926 Power Management 00:09:03.926 ================ 00:09:03.926 Number of Power States: 1 00:09:03.926 Current Power State: Power State #0 00:09:03.926 Power State #0: 
00:09:03.926 Max Power: 25.00 W 00:09:03.926 Non-Operational State: Operational 00:09:03.926 Entry Latency: 16 microseconds 00:09:03.926 Exit Latency: 4 microseconds 00:09:03.926 Relative Read Throughput: 0 00:09:03.926 Relative Read Latency: 0 00:09:03.926 Relative Write Throughput: 0 00:09:03.926 Relative Write Latency: 0 00:09:03.926 Idle Power: Not Reported 00:09:03.926 Active Power: Not Reported 00:09:03.926 Non-Operational Permissive Mode: Not Supported 00:09:03.926 00:09:03.926 Health Information 00:09:03.926 ================== 00:09:03.926 Critical Warnings: 00:09:03.926 Available Spare Space: OK 00:09:03.926 Temperature: OK 00:09:03.926 Device Reliability: OK 00:09:03.926 Read Only: No 00:09:03.926 Volatile Memory Backup: OK 00:09:03.926 Current Temperature: 323 Kelvin (50 Celsius) 00:09:03.926 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:03.926 Available Spare: 0% 00:09:03.926 Available Spare Threshold: 0% 00:09:03.926 Life Percentage Used: 0% 00:09:03.926 Data Units Read: 2612 00:09:03.926 Data Units Written: 2292 00:09:03.926 Host Read Commands: 117231 00:09:03.926 Host Write Commands: 113001 00:09:03.926 Controller Busy Time: 0 minutes 00:09:03.926 Power Cycles: 0 00:09:03.926 Power On Hours: 0 hours 00:09:03.926 Unsafe Shutdowns: 0 00:09:03.926 Unrecoverable Media Errors: 0 00:09:03.926 Lifetime Error Log Entries: 0 00:09:03.926 Warning Temperature Time: 0 minutes 00:09:03.926 Critical Temperature Time: 0 minutes 00:09:03.926 00:09:03.926 Number of Queues 00:09:03.926 ================ 00:09:03.926 Number of I/O Submission Queues: 64 00:09:03.926 Number of I/O Completion Queues: 64 00:09:03.926 00:09:03.926 ZNS Specific Controller Data 00:09:03.926 ============================ 00:09:03.926 Zone Append Size Limit: 0 00:09:03.926 00:09:03.926 00:09:03.926 Active Namespaces 00:09:03.926 ================= 00:09:03.926 Namespace ID:1 00:09:03.926 Error Recovery Timeout: Unlimited 00:09:03.926 Command Set Identifier: NVM (00h) 00:09:03.926 Deallocate: Supported 00:09:03.926 Deallocated/Unwritten Error: Supported 00:09:03.926 Deallocated Read Value: All 0x00 00:09:03.926 Deallocate in Write Zeroes: Not Supported 00:09:03.926 Deallocated Guard Field: 0xFFFF 00:09:03.926 Flush: Supported 00:09:03.926 Reservation: Not Supported 00:09:03.926 Namespace Sharing Capabilities: Private 00:09:03.926 Size (in LBAs): 1048576 (4GiB) 00:09:03.926 Capacity (in LBAs): 1048576 (4GiB) 00:09:03.926 Utilization (in LBAs): 1048576 (4GiB) 00:09:03.926 Thin Provisioning: Not Supported 00:09:03.926 Per-NS Atomic Units: No 00:09:03.926 Maximum Single Source Range Length: 128 00:09:03.926 Maximum Copy Length: 128 00:09:03.926 Maximum Source Range Count: 128 00:09:03.926 NGUID/EUI64 Never Reused: No 00:09:03.926 Namespace Write Protected: No 00:09:03.926 Number of LBA Formats: 8 00:09:03.926 Current LBA Format: LBA Format #04 00:09:03.926 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:03.926 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:03.926 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:03.926 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:03.926 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:03.926 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:03.926 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:03.926 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:03.926 00:09:03.926 Namespace ID:2 00:09:03.926 Error Recovery Timeout: Unlimited 00:09:03.926 Command Set Identifier: NVM (00h) 00:09:03.926 Deallocate: Supported 00:09:03.926 
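Illustrative aside: per the NVMe base spec, the SMART "Data Units Read/Written" counters above are in units of 1000 512-byte blocks (512,000 bytes per unit); converting the 0000:00:12.0 figures:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long read_units = 2612, written_units = 2292;
        printf("read  ~= %.2f GB\n", read_units * 512000.0 / 1e9);    /* ~1.34 GB */
        printf("wrote ~= %.2f GB\n", written_units * 512000.0 / 1e9); /* ~1.17 GB */
        return 0;
    }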
Deallocated/Unwritten Error: Supported 00:09:03.926 Deallocated Read Value: All 0x00 00:09:03.926 Deallocate in Write Zeroes: Not Supported 00:09:03.926 Deallocated Guard Field: 0xFFFF 00:09:03.926 Flush: Supported 00:09:03.926 Reservation: Not Supported 00:09:03.926 Namespace Sharing Capabilities: Private 00:09:03.926 Size (in LBAs): 1048576 (4GiB) 00:09:03.926 Capacity (in LBAs): 1048576 (4GiB) 00:09:03.926 Utilization (in LBAs): 1048576 (4GiB) 00:09:03.926 Thin Provisioning: Not Supported 00:09:03.926 Per-NS Atomic Units: No 00:09:03.926 Maximum Single Source Range Length: 128 00:09:03.926 Maximum Copy Length: 128 00:09:03.926 Maximum Source Range Count: 128 00:09:03.926 NGUID/EUI64 Never Reused: No 00:09:03.926 Namespace Write Protected: No 00:09:03.926 Number of LBA Formats: 8 00:09:03.926 Current LBA Format: LBA Format #04 00:09:03.926 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:03.926 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:03.926 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:03.926 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:03.926 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:03.927 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:03.927 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:03.927 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:03.927 00:09:03.927 Namespace ID:3 00:09:03.927 Error Recovery Timeout: Unlimited 00:09:03.927 Command Set Identifier: NVM (00h) 00:09:03.927 Deallocate: Supported 00:09:03.927 Deallocated/Unwritten Error: Supported 00:09:03.927 Deallocated Read Value: All 0x00 00:09:03.927 Deallocate in Write Zeroes: Not Supported 00:09:03.927 Deallocated Guard Field: 0xFFFF 00:09:03.927 Flush: Supported 00:09:03.927 Reservation: Not Supported 00:09:03.927 Namespace Sharing Capabilities: Private 00:09:03.927 Size (in LBAs): 1048576 (4GiB) 00:09:03.927 Capacity (in LBAs): 1048576 (4GiB) 00:09:03.927 Utilization (in LBAs): 1048576 (4GiB) 00:09:03.927 Thin Provisioning: Not Supported 00:09:03.927 Per-NS Atomic Units: No 00:09:03.927 Maximum Single Source Range Length: 128 00:09:03.927 Maximum Copy Length: 128 00:09:03.927 Maximum Source Range Count: 128 00:09:03.927 NGUID/EUI64 Never Reused: No 00:09:03.927 Namespace Write Protected: No 00:09:03.927 Number of LBA Formats: 8 00:09:03.927 Current LBA Format: LBA Format #04 00:09:03.927 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:03.927 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:03.927 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:03.927 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:03.927 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:03.927 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:03.927 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:03.927 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:03.927 00:09:04.186 00:13:18 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:04.186 00:13:18 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:04.186 ===================================================== 00:09:04.186 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:04.186 ===================================================== 00:09:04.186 Controller Capabilities/Features 00:09:04.186 ================================ 00:09:04.186 Vendor ID: 1b36 00:09:04.186 Subsystem Vendor ID: 1af4 00:09:04.186 Serial Number: 12340 00:09:04.186 Model Number: QEMU NVMe Ctrl 
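Illustrative aside: the -r argument in the spdk_nvme_identify invocations above is a transport-ID string. A minimal sketch of consuming one with SPDK's public spdk_nvme_transport_id_parse(), assuming an SPDK development install for headers and libraries; this is not the test's own code:

    #include <stdio.h>
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_nvme_transport_id trid = {0};

        /* Parse the same string format the identify tool is given above. */
        if (spdk_nvme_transport_id_parse(&trid, "trtype:PCIe traddr:0000:00:10.0") != 0) {
            fprintf(stderr, "failed to parse transport ID\n");
            return 1;
        }
        printf("trtype=%d traddr=%s\n", (int)trid.trtype, trid.traddr);
        return 0;
    }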
00:09:04.186 Firmware Version: 8.0.0 00:09:04.186 Recommended Arb Burst: 6 00:09:04.186 IEEE OUI Identifier: 00 54 52 00:09:04.186 Multi-path I/O 00:09:04.186 May have multiple subsystem ports: No 00:09:04.186 May have multiple controllers: No 00:09:04.186 Associated with SR-IOV VF: No 00:09:04.186 Max Data Transfer Size: 524288 00:09:04.186 Max Number of Namespaces: 256 00:09:04.186 Max Number of I/O Queues: 64 00:09:04.186 NVMe Specification Version (VS): 1.4 00:09:04.186 NVMe Specification Version (Identify): 1.4 00:09:04.186 Maximum Queue Entries: 2048 00:09:04.186 Contiguous Queues Required: Yes 00:09:04.186 Arbitration Mechanisms Supported 00:09:04.186 Weighted Round Robin: Not Supported 00:09:04.186 Vendor Specific: Not Supported 00:09:04.186 Reset Timeout: 7500 ms 00:09:04.186 Doorbell Stride: 4 bytes 00:09:04.186 NVM Subsystem Reset: Not Supported 00:09:04.186 Command Sets Supported 00:09:04.186 NVM Command Set: Supported 00:09:04.186 Boot Partition: Not Supported 00:09:04.186 Memory Page Size Minimum: 4096 bytes 00:09:04.186 Memory Page Size Maximum: 65536 bytes 00:09:04.186 Persistent Memory Region: Not Supported 00:09:04.186 Optional Asynchronous Events Supported 00:09:04.186 Namespace Attribute Notices: Supported 00:09:04.186 Firmware Activation Notices: Not Supported 00:09:04.186 ANA Change Notices: Not Supported 00:09:04.186 PLE Aggregate Log Change Notices: Not Supported 00:09:04.186 LBA Status Info Alert Notices: Not Supported 00:09:04.186 EGE Aggregate Log Change Notices: Not Supported 00:09:04.186 Normal NVM Subsystem Shutdown event: Not Supported 00:09:04.186 Zone Descriptor Change Notices: Not Supported 00:09:04.186 Discovery Log Change Notices: Not Supported 00:09:04.186 Controller Attributes 00:09:04.186 128-bit Host Identifier: Not Supported 00:09:04.187 Non-Operational Permissive Mode: Not Supported 00:09:04.187 NVM Sets: Not Supported 00:09:04.187 Read Recovery Levels: Not Supported 00:09:04.187 Endurance Groups: Not Supported 00:09:04.187 Predictable Latency Mode: Not Supported 00:09:04.187 Traffic Based Keep ALive: Not Supported 00:09:04.187 Namespace Granularity: Not Supported 00:09:04.187 SQ Associations: Not Supported 00:09:04.187 UUID List: Not Supported 00:09:04.187 Multi-Domain Subsystem: Not Supported 00:09:04.187 Fixed Capacity Management: Not Supported 00:09:04.187 Variable Capacity Management: Not Supported 00:09:04.187 Delete Endurance Group: Not Supported 00:09:04.187 Delete NVM Set: Not Supported 00:09:04.187 Extended LBA Formats Supported: Supported 00:09:04.187 Flexible Data Placement Supported: Not Supported 00:09:04.187 00:09:04.187 Controller Memory Buffer Support 00:09:04.187 ================================ 00:09:04.187 Supported: No 00:09:04.187 00:09:04.187 Persistent Memory Region Support 00:09:04.187 ================================ 00:09:04.187 Supported: No 00:09:04.187 00:09:04.187 Admin Command Set Attributes 00:09:04.187 ============================ 00:09:04.187 Security Send/Receive: Not Supported 00:09:04.187 Format NVM: Supported 00:09:04.187 Firmware Activate/Download: Not Supported 00:09:04.187 Namespace Management: Supported 00:09:04.187 Device Self-Test: Not Supported 00:09:04.187 Directives: Supported 00:09:04.187 NVMe-MI: Not Supported 00:09:04.187 Virtualization Management: Not Supported 00:09:04.187 Doorbell Buffer Config: Supported 00:09:04.187 Get LBA Status Capability: Not Supported 00:09:04.187 Command & Feature Lockdown Capability: Not Supported 00:09:04.187 Abort Command Limit: 4 00:09:04.187 Async Event Request 
Limit: 4 00:09:04.187 Number of Firmware Slots: N/A 00:09:04.187 Firmware Slot 1 Read-Only: N/A 00:09:04.187 Firmware Activation Without Reset: N/A 00:09:04.187 Multiple Update Detection Support: N/A 00:09:04.187 Firmware Update Granularity: No Information Provided 00:09:04.187 Per-Namespace SMART Log: Yes 00:09:04.187 Asymmetric Namespace Access Log Page: Not Supported 00:09:04.187 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:04.187 Command Effects Log Page: Supported 00:09:04.187 Get Log Page Extended Data: Supported 00:09:04.187 Telemetry Log Pages: Not Supported 00:09:04.187 Persistent Event Log Pages: Not Supported 00:09:04.187 Supported Log Pages Log Page: May Support 00:09:04.187 Commands Supported & Effects Log Page: Not Supported 00:09:04.187 Feature Identifiers & Effects Log Page:May Support 00:09:04.187 NVMe-MI Commands & Effects Log Page: May Support 00:09:04.187 Data Area 4 for Telemetry Log: Not Supported 00:09:04.187 Error Log Page Entries Supported: 1 00:09:04.187 Keep Alive: Not Supported 00:09:04.187 00:09:04.187 NVM Command Set Attributes 00:09:04.187 ========================== 00:09:04.187 Submission Queue Entry Size 00:09:04.187 Max: 64 00:09:04.187 Min: 64 00:09:04.187 Completion Queue Entry Size 00:09:04.187 Max: 16 00:09:04.187 Min: 16 00:09:04.187 Number of Namespaces: 256 00:09:04.187 Compare Command: Supported 00:09:04.187 Write Uncorrectable Command: Not Supported 00:09:04.187 Dataset Management Command: Supported 00:09:04.187 Write Zeroes Command: Supported 00:09:04.187 Set Features Save Field: Supported 00:09:04.187 Reservations: Not Supported 00:09:04.187 Timestamp: Supported 00:09:04.187 Copy: Supported 00:09:04.187 Volatile Write Cache: Present 00:09:04.187 Atomic Write Unit (Normal): 1 00:09:04.187 Atomic Write Unit (PFail): 1 00:09:04.187 Atomic Compare & Write Unit: 1 00:09:04.187 Fused Compare & Write: Not Supported 00:09:04.187 Scatter-Gather List 00:09:04.187 SGL Command Set: Supported 00:09:04.187 SGL Keyed: Not Supported 00:09:04.187 SGL Bit Bucket Descriptor: Not Supported 00:09:04.187 SGL Metadata Pointer: Not Supported 00:09:04.187 Oversized SGL: Not Supported 00:09:04.187 SGL Metadata Address: Not Supported 00:09:04.187 SGL Offset: Not Supported 00:09:04.187 Transport SGL Data Block: Not Supported 00:09:04.187 Replay Protected Memory Block: Not Supported 00:09:04.187 00:09:04.187 Firmware Slot Information 00:09:04.187 ========================= 00:09:04.187 Active slot: 1 00:09:04.187 Slot 1 Firmware Revision: 1.0 00:09:04.187 00:09:04.187 00:09:04.187 Commands Supported and Effects 00:09:04.187 ============================== 00:09:04.187 Admin Commands 00:09:04.187 -------------- 00:09:04.187 Delete I/O Submission Queue (00h): Supported 00:09:04.187 Create I/O Submission Queue (01h): Supported 00:09:04.187 Get Log Page (02h): Supported 00:09:04.187 Delete I/O Completion Queue (04h): Supported 00:09:04.187 Create I/O Completion Queue (05h): Supported 00:09:04.187 Identify (06h): Supported 00:09:04.187 Abort (08h): Supported 00:09:04.187 Set Features (09h): Supported 00:09:04.187 Get Features (0Ah): Supported 00:09:04.187 Asynchronous Event Request (0Ch): Supported 00:09:04.187 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:04.187 Directive Send (19h): Supported 00:09:04.187 Directive Receive (1Ah): Supported 00:09:04.187 Virtualization Management (1Ch): Supported 00:09:04.187 Doorbell Buffer Config (7Ch): Supported 00:09:04.187 Format NVM (80h): Supported LBA-Change 00:09:04.187 I/O Commands 00:09:04.187 ------------ 
00:09:04.187 Flush (00h): Supported LBA-Change 00:09:04.187 Write (01h): Supported LBA-Change 00:09:04.187 Read (02h): Supported 00:09:04.187 Compare (05h): Supported 00:09:04.187 Write Zeroes (08h): Supported LBA-Change 00:09:04.187 Dataset Management (09h): Supported LBA-Change 00:09:04.187 Unknown (0Ch): Supported 00:09:04.187 Unknown (12h): Supported 00:09:04.187 Copy (19h): Supported LBA-Change 00:09:04.187 Unknown (1Dh): Supported LBA-Change 00:09:04.187 00:09:04.187 Error Log 00:09:04.187 ========= 00:09:04.187 00:09:04.187 Arbitration 00:09:04.187 =========== 00:09:04.187 Arbitration Burst: no limit 00:09:04.187 00:09:04.187 Power Management 00:09:04.187 ================ 00:09:04.187 Number of Power States: 1 00:09:04.187 Current Power State: Power State #0 00:09:04.187 Power State #0: 00:09:04.187 Max Power: 25.00 W 00:09:04.187 Non-Operational State: Operational 00:09:04.187 Entry Latency: 16 microseconds 00:09:04.187 Exit Latency: 4 microseconds 00:09:04.187 Relative Read Throughput: 0 00:09:04.187 Relative Read Latency: 0 00:09:04.187 Relative Write Throughput: 0 00:09:04.187 Relative Write Latency: 0 00:09:04.187 Idle Power: Not Reported 00:09:04.187 Active Power: Not Reported 00:09:04.187 Non-Operational Permissive Mode: Not Supported 00:09:04.187 00:09:04.187 Health Information 00:09:04.187 ================== 00:09:04.187 Critical Warnings: 00:09:04.187 Available Spare Space: OK 00:09:04.187 Temperature: OK 00:09:04.187 Device Reliability: OK 00:09:04.187 Read Only: No 00:09:04.187 Volatile Memory Backup: OK 00:09:04.187 Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.187 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:04.187 Available Spare: 0% 00:09:04.187 Available Spare Threshold: 0% 00:09:04.187 Life Percentage Used: 0% 00:09:04.187 Data Units Read: 1199 00:09:04.187 Data Units Written: 1032 00:09:04.187 Host Read Commands: 55646 00:09:04.187 Host Write Commands: 54149 00:09:04.187 Controller Busy Time: 0 minutes 00:09:04.187 Power Cycles: 0 00:09:04.187 Power On Hours: 0 hours 00:09:04.187 Unsafe Shutdowns: 0 00:09:04.187 Unrecoverable Media Errors: 0 00:09:04.187 Lifetime Error Log Entries: 0 00:09:04.187 Warning Temperature Time: 0 minutes 00:09:04.187 Critical Temperature Time: 0 minutes 00:09:04.187 00:09:04.187 Number of Queues 00:09:04.187 ================ 00:09:04.187 Number of I/O Submission Queues: 64 00:09:04.187 Number of I/O Completion Queues: 64 00:09:04.187 00:09:04.187 ZNS Specific Controller Data 00:09:04.187 ============================ 00:09:04.187 Zone Append Size Limit: 0 00:09:04.187 00:09:04.187 00:09:04.187 Active Namespaces 00:09:04.187 ================= 00:09:04.187 Namespace ID:1 00:09:04.187 Error Recovery Timeout: Unlimited 00:09:04.187 Command Set Identifier: NVM (00h) 00:09:04.187 Deallocate: Supported 00:09:04.187 Deallocated/Unwritten Error: Supported 00:09:04.187 Deallocated Read Value: All 0x00 00:09:04.187 Deallocate in Write Zeroes: Not Supported 00:09:04.188 Deallocated Guard Field: 0xFFFF 00:09:04.188 Flush: Supported 00:09:04.188 Reservation: Not Supported 00:09:04.188 Metadata Transferred as: Separate Metadata Buffer 00:09:04.188 Namespace Sharing Capabilities: Private 00:09:04.188 Size (in LBAs): 1548666 (5GiB) 00:09:04.188 Capacity (in LBAs): 1548666 (5GiB) 00:09:04.188 Utilization (in LBAs): 1548666 (5GiB) 00:09:04.188 Thin Provisioning: Not Supported 00:09:04.188 Per-NS Atomic Units: No 00:09:04.188 Maximum Single Source Range Length: 128 00:09:04.188 Maximum Copy Length: 128 00:09:04.188 Maximum Source Range 
Count: 128 00:09:04.188 NGUID/EUI64 Never Reused: No 00:09:04.188 Namespace Write Protected: No 00:09:04.188 Number of LBA Formats: 8 00:09:04.188 Current LBA Format: LBA Format #07 00:09:04.188 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:04.188 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:04.188 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:04.188 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:04.188 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:04.188 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:04.188 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:04.188 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:04.188 00:09:04.188 00:13:18 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:04.188 00:13:18 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:04.447 ===================================================== 00:09:04.447 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:04.447 ===================================================== 00:09:04.447 Controller Capabilities/Features 00:09:04.447 ================================ 00:09:04.447 Vendor ID: 1b36 00:09:04.447 Subsystem Vendor ID: 1af4 00:09:04.447 Serial Number: 12341 00:09:04.447 Model Number: QEMU NVMe Ctrl 00:09:04.447 Firmware Version: 8.0.0 00:09:04.447 Recommended Arb Burst: 6 00:09:04.447 IEEE OUI Identifier: 00 54 52 00:09:04.447 Multi-path I/O 00:09:04.447 May have multiple subsystem ports: No 00:09:04.447 May have multiple controllers: No 00:09:04.447 Associated with SR-IOV VF: No 00:09:04.447 Max Data Transfer Size: 524288 00:09:04.447 Max Number of Namespaces: 256 00:09:04.447 Max Number of I/O Queues: 64 00:09:04.447 NVMe Specification Version (VS): 1.4 00:09:04.447 NVMe Specification Version (Identify): 1.4 00:09:04.447 Maximum Queue Entries: 2048 00:09:04.447 Contiguous Queues Required: Yes 00:09:04.447 Arbitration Mechanisms Supported 00:09:04.447 Weighted Round Robin: Not Supported 00:09:04.447 Vendor Specific: Not Supported 00:09:04.447 Reset Timeout: 7500 ms 00:09:04.447 Doorbell Stride: 4 bytes 00:09:04.447 NVM Subsystem Reset: Not Supported 00:09:04.447 Command Sets Supported 00:09:04.447 NVM Command Set: Supported 00:09:04.447 Boot Partition: Not Supported 00:09:04.447 Memory Page Size Minimum: 4096 bytes 00:09:04.447 Memory Page Size Maximum: 65536 bytes 00:09:04.447 Persistent Memory Region: Not Supported 00:09:04.447 Optional Asynchronous Events Supported 00:09:04.447 Namespace Attribute Notices: Supported 00:09:04.447 Firmware Activation Notices: Not Supported 00:09:04.447 ANA Change Notices: Not Supported 00:09:04.447 PLE Aggregate Log Change Notices: Not Supported 00:09:04.447 LBA Status Info Alert Notices: Not Supported 00:09:04.447 EGE Aggregate Log Change Notices: Not Supported 00:09:04.447 Normal NVM Subsystem Shutdown event: Not Supported 00:09:04.447 Zone Descriptor Change Notices: Not Supported 00:09:04.447 Discovery Log Change Notices: Not Supported 00:09:04.447 Controller Attributes 00:09:04.447 128-bit Host Identifier: Not Supported 00:09:04.447 Non-Operational Permissive Mode: Not Supported 00:09:04.447 NVM Sets: Not Supported 00:09:04.447 Read Recovery Levels: Not Supported 00:09:04.447 Endurance Groups: Not Supported 00:09:04.448 Predictable Latency Mode: Not Supported 00:09:04.448 Traffic Based Keep ALive: Not Supported 00:09:04.448 Namespace Granularity: Not Supported 00:09:04.448 SQ Associations: Not 
Supported 00:09:04.448 UUID List: Not Supported 00:09:04.448 Multi-Domain Subsystem: Not Supported 00:09:04.448 Fixed Capacity Management: Not Supported 00:09:04.448 Variable Capacity Management: Not Supported 00:09:04.448 Delete Endurance Group: Not Supported 00:09:04.448 Delete NVM Set: Not Supported 00:09:04.448 Extended LBA Formats Supported: Supported 00:09:04.448 Flexible Data Placement Supported: Not Supported 00:09:04.448 00:09:04.448 Controller Memory Buffer Support 00:09:04.448 ================================ 00:09:04.448 Supported: No 00:09:04.448 00:09:04.448 Persistent Memory Region Support 00:09:04.448 ================================ 00:09:04.448 Supported: No 00:09:04.448 00:09:04.448 Admin Command Set Attributes 00:09:04.448 ============================ 00:09:04.448 Security Send/Receive: Not Supported 00:09:04.448 Format NVM: Supported 00:09:04.448 Firmware Activate/Download: Not Supported 00:09:04.448 Namespace Management: Supported 00:09:04.448 Device Self-Test: Not Supported 00:09:04.448 Directives: Supported 00:09:04.448 NVMe-MI: Not Supported 00:09:04.448 Virtualization Management: Not Supported 00:09:04.448 Doorbell Buffer Config: Supported 00:09:04.448 Get LBA Status Capability: Not Supported 00:09:04.448 Command & Feature Lockdown Capability: Not Supported 00:09:04.448 Abort Command Limit: 4 00:09:04.448 Async Event Request Limit: 4 00:09:04.448 Number of Firmware Slots: N/A 00:09:04.448 Firmware Slot 1 Read-Only: N/A 00:09:04.448 Firmware Activation Without Reset: N/A 00:09:04.448 Multiple Update Detection Support: N/A 00:09:04.448 Firmware Update Granularity: No Information Provided 00:09:04.448 Per-Namespace SMART Log: Yes 00:09:04.448 Asymmetric Namespace Access Log Page: Not Supported 00:09:04.448 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:04.448 Command Effects Log Page: Supported 00:09:04.448 Get Log Page Extended Data: Supported 00:09:04.448 Telemetry Log Pages: Not Supported 00:09:04.448 Persistent Event Log Pages: Not Supported 00:09:04.448 Supported Log Pages Log Page: May Support 00:09:04.448 Commands Supported & Effects Log Page: Not Supported 00:09:04.448 Feature Identifiers & Effects Log Page:May Support 00:09:04.448 NVMe-MI Commands & Effects Log Page: May Support 00:09:04.448 Data Area 4 for Telemetry Log: Not Supported 00:09:04.448 Error Log Page Entries Supported: 1 00:09:04.448 Keep Alive: Not Supported 00:09:04.448 00:09:04.448 NVM Command Set Attributes 00:09:04.448 ========================== 00:09:04.448 Submission Queue Entry Size 00:09:04.448 Max: 64 00:09:04.448 Min: 64 00:09:04.448 Completion Queue Entry Size 00:09:04.448 Max: 16 00:09:04.448 Min: 16 00:09:04.448 Number of Namespaces: 256 00:09:04.448 Compare Command: Supported 00:09:04.448 Write Uncorrectable Command: Not Supported 00:09:04.448 Dataset Management Command: Supported 00:09:04.448 Write Zeroes Command: Supported 00:09:04.448 Set Features Save Field: Supported 00:09:04.448 Reservations: Not Supported 00:09:04.448 Timestamp: Supported 00:09:04.448 Copy: Supported 00:09:04.448 Volatile Write Cache: Present 00:09:04.448 Atomic Write Unit (Normal): 1 00:09:04.448 Atomic Write Unit (PFail): 1 00:09:04.448 Atomic Compare & Write Unit: 1 00:09:04.448 Fused Compare & Write: Not Supported 00:09:04.448 Scatter-Gather List 00:09:04.448 SGL Command Set: Supported 00:09:04.448 SGL Keyed: Not Supported 00:09:04.448 SGL Bit Bucket Descriptor: Not Supported 00:09:04.448 SGL Metadata Pointer: Not Supported 00:09:04.448 Oversized SGL: Not Supported 00:09:04.448 SGL Metadata Address: 
Not Supported 00:09:04.448 SGL Offset: Not Supported 00:09:04.448 Transport SGL Data Block: Not Supported 00:09:04.448 Replay Protected Memory Block: Not Supported 00:09:04.448 00:09:04.448 Firmware Slot Information 00:09:04.448 ========================= 00:09:04.448 Active slot: 1 00:09:04.448 Slot 1 Firmware Revision: 1.0 00:09:04.448 00:09:04.448 00:09:04.448 Commands Supported and Effects 00:09:04.448 ============================== 00:09:04.448 Admin Commands 00:09:04.448 -------------- 00:09:04.448 Delete I/O Submission Queue (00h): Supported 00:09:04.448 Create I/O Submission Queue (01h): Supported 00:09:04.448 Get Log Page (02h): Supported 00:09:04.448 Delete I/O Completion Queue (04h): Supported 00:09:04.448 Create I/O Completion Queue (05h): Supported 00:09:04.448 Identify (06h): Supported 00:09:04.448 Abort (08h): Supported 00:09:04.448 Set Features (09h): Supported 00:09:04.448 Get Features (0Ah): Supported 00:09:04.448 Asynchronous Event Request (0Ch): Supported 00:09:04.448 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:04.448 Directive Send (19h): Supported 00:09:04.448 Directive Receive (1Ah): Supported 00:09:04.448 Virtualization Management (1Ch): Supported 00:09:04.448 Doorbell Buffer Config (7Ch): Supported 00:09:04.448 Format NVM (80h): Supported LBA-Change 00:09:04.448 I/O Commands 00:09:04.448 ------------ 00:09:04.448 Flush (00h): Supported LBA-Change 00:09:04.448 Write (01h): Supported LBA-Change 00:09:04.448 Read (02h): Supported 00:09:04.448 Compare (05h): Supported 00:09:04.448 Write Zeroes (08h): Supported LBA-Change 00:09:04.448 Dataset Management (09h): Supported LBA-Change 00:09:04.448 Unknown (0Ch): Supported 00:09:04.448 Unknown (12h): Supported 00:09:04.448 Copy (19h): Supported LBA-Change 00:09:04.448 Unknown (1Dh): Supported LBA-Change 00:09:04.448 00:09:04.448 Error Log 00:09:04.448 ========= 00:09:04.448 00:09:04.448 Arbitration 00:09:04.448 =========== 00:09:04.448 Arbitration Burst: no limit 00:09:04.448 00:09:04.448 Power Management 00:09:04.448 ================ 00:09:04.448 Number of Power States: 1 00:09:04.448 Current Power State: Power State #0 00:09:04.448 Power State #0: 00:09:04.448 Max Power: 25.00 W 00:09:04.448 Non-Operational State: Operational 00:09:04.448 Entry Latency: 16 microseconds 00:09:04.448 Exit Latency: 4 microseconds 00:09:04.448 Relative Read Throughput: 0 00:09:04.448 Relative Read Latency: 0 00:09:04.448 Relative Write Throughput: 0 00:09:04.448 Relative Write Latency: 0 00:09:04.448 Idle Power: Not Reported 00:09:04.448 Active Power: Not Reported 00:09:04.448 Non-Operational Permissive Mode: Not Supported 00:09:04.448 00:09:04.448 Health Information 00:09:04.448 ================== 00:09:04.448 Critical Warnings: 00:09:04.448 Available Spare Space: OK 00:09:04.448 Temperature: OK 00:09:04.448 Device Reliability: OK 00:09:04.448 Read Only: No 00:09:04.448 Volatile Memory Backup: OK 00:09:04.448 Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.448 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:04.448 Available Spare: 0% 00:09:04.448 Available Spare Threshold: 0% 00:09:04.448 Life Percentage Used: 0% 00:09:04.448 Data Units Read: 893 00:09:04.448 Data Units Written: 746 00:09:04.448 Host Read Commands: 39738 00:09:04.448 Host Write Commands: 37556 00:09:04.448 Controller Busy Time: 0 minutes 00:09:04.448 Power Cycles: 0 00:09:04.448 Power On Hours: 0 hours 00:09:04.448 Unsafe Shutdowns: 0 00:09:04.448 Unrecoverable Media Errors: 0 00:09:04.448 Lifetime Error Log Entries: 0 00:09:04.448 Warning 
Temperature Time: 0 minutes 00:09:04.448 Critical Temperature Time: 0 minutes 00:09:04.448 00:09:04.448 Number of Queues 00:09:04.448 ================ 00:09:04.448 Number of I/O Submission Queues: 64 00:09:04.448 Number of I/O Completion Queues: 64 00:09:04.448 00:09:04.448 ZNS Specific Controller Data 00:09:04.448 ============================ 00:09:04.448 Zone Append Size Limit: 0 00:09:04.448 00:09:04.448 00:09:04.448 Active Namespaces 00:09:04.448 ================= 00:09:04.448 Namespace ID:1 00:09:04.448 Error Recovery Timeout: Unlimited 00:09:04.448 Command Set Identifier: NVM (00h) 00:09:04.448 Deallocate: Supported 00:09:04.448 Deallocated/Unwritten Error: Supported 00:09:04.448 Deallocated Read Value: All 0x00 00:09:04.448 Deallocate in Write Zeroes: Not Supported 00:09:04.448 Deallocated Guard Field: 0xFFFF 00:09:04.448 Flush: Supported 00:09:04.448 Reservation: Not Supported 00:09:04.448 Namespace Sharing Capabilities: Private 00:09:04.448 Size (in LBAs): 1310720 (5GiB) 00:09:04.448 Capacity (in LBAs): 1310720 (5GiB) 00:09:04.448 Utilization (in LBAs): 1310720 (5GiB) 00:09:04.448 Thin Provisioning: Not Supported 00:09:04.449 Per-NS Atomic Units: No 00:09:04.449 Maximum Single Source Range Length: 128 00:09:04.449 Maximum Copy Length: 128 00:09:04.449 Maximum Source Range Count: 128 00:09:04.449 NGUID/EUI64 Never Reused: No 00:09:04.449 Namespace Write Protected: No 00:09:04.449 Number of LBA Formats: 8 00:09:04.449 Current LBA Format: LBA Format #04 00:09:04.449 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:04.449 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:04.449 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:04.449 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:04.449 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:04.449 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:04.449 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:04.449 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:04.449 00:09:04.708 00:13:19 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:04.708 00:13:19 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:04.708 ===================================================== 00:09:04.708 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:04.708 ===================================================== 00:09:04.708 Controller Capabilities/Features 00:09:04.708 ================================ 00:09:04.708 Vendor ID: 1b36 00:09:04.708 Subsystem Vendor ID: 1af4 00:09:04.708 Serial Number: 12342 00:09:04.708 Model Number: QEMU NVMe Ctrl 00:09:04.708 Firmware Version: 8.0.0 00:09:04.708 Recommended Arb Burst: 6 00:09:04.708 IEEE OUI Identifier: 00 54 52 00:09:04.708 Multi-path I/O 00:09:04.708 May have multiple subsystem ports: No 00:09:04.708 May have multiple controllers: No 00:09:04.708 Associated with SR-IOV VF: No 00:09:04.708 Max Data Transfer Size: 524288 00:09:04.708 Max Number of Namespaces: 256 00:09:04.708 Max Number of I/O Queues: 64 00:09:04.708 NVMe Specification Version (VS): 1.4 00:09:04.708 NVMe Specification Version (Identify): 1.4 00:09:04.708 Maximum Queue Entries: 2048 00:09:04.708 Contiguous Queues Required: Yes 00:09:04.708 Arbitration Mechanisms Supported 00:09:04.708 Weighted Round Robin: Not Supported 00:09:04.708 Vendor Specific: Not Supported 00:09:04.708 Reset Timeout: 7500 ms 00:09:04.708 Doorbell Stride: 4 bytes 00:09:04.708 NVM Subsystem Reset: Not Supported 
00:09:04.708 Command Sets Supported 00:09:04.708 NVM Command Set: Supported 00:09:04.708 Boot Partition: Not Supported 00:09:04.708 Memory Page Size Minimum: 4096 bytes 00:09:04.708 Memory Page Size Maximum: 65536 bytes 00:09:04.708 Persistent Memory Region: Not Supported 00:09:04.708 Optional Asynchronous Events Supported 00:09:04.708 Namespace Attribute Notices: Supported 00:09:04.708 Firmware Activation Notices: Not Supported 00:09:04.708 ANA Change Notices: Not Supported 00:09:04.708 PLE Aggregate Log Change Notices: Not Supported 00:09:04.708 LBA Status Info Alert Notices: Not Supported 00:09:04.708 EGE Aggregate Log Change Notices: Not Supported 00:09:04.708 Normal NVM Subsystem Shutdown event: Not Supported 00:09:04.708 Zone Descriptor Change Notices: Not Supported 00:09:04.708 Discovery Log Change Notices: Not Supported 00:09:04.708 Controller Attributes 00:09:04.708 128-bit Host Identifier: Not Supported 00:09:04.708 Non-Operational Permissive Mode: Not Supported 00:09:04.708 NVM Sets: Not Supported 00:09:04.708 Read Recovery Levels: Not Supported 00:09:04.708 Endurance Groups: Not Supported 00:09:04.708 Predictable Latency Mode: Not Supported 00:09:04.708 Traffic Based Keep Alive: Not Supported 00:09:04.708 Namespace Granularity: Not Supported 00:09:04.708 SQ Associations: Not Supported 00:09:04.708 UUID List: Not Supported 00:09:04.708 Multi-Domain Subsystem: Not Supported 00:09:04.708 Fixed Capacity Management: Not Supported 00:09:04.708 Variable Capacity Management: Not Supported 00:09:04.708 Delete Endurance Group: Not Supported 00:09:04.708 Delete NVM Set: Not Supported 00:09:04.708 Extended LBA Formats Supported: Supported 00:09:04.708 Flexible Data Placement Supported: Not Supported 00:09:04.708 00:09:04.708 Controller Memory Buffer Support 00:09:04.708 ================================ 00:09:04.708 Supported: No 00:09:04.708 00:09:04.708 Persistent Memory Region Support 00:09:04.708 ================================ 00:09:04.708 Supported: No 00:09:04.708 00:09:04.708 Admin Command Set Attributes 00:09:04.708 ============================ 00:09:04.708 Security Send/Receive: Not Supported 00:09:04.708 Format NVM: Supported 00:09:04.708 Firmware Activate/Download: Not Supported 00:09:04.708 Namespace Management: Supported 00:09:04.709 Device Self-Test: Not Supported 00:09:04.709 Directives: Supported 00:09:04.709 NVMe-MI: Not Supported 00:09:04.709 Virtualization Management: Not Supported 00:09:04.709 Doorbell Buffer Config: Supported 00:09:04.709 Get LBA Status Capability: Not Supported 00:09:04.709 Command & Feature Lockdown Capability: Not Supported 00:09:04.709 Abort Command Limit: 4 00:09:04.709 Async Event Request Limit: 4 00:09:04.709 Number of Firmware Slots: N/A 00:09:04.709 Firmware Slot 1 Read-Only: N/A 00:09:04.709 Firmware Activation Without Reset: N/A 00:09:04.709 Multiple Update Detection Support: N/A 00:09:04.709 Firmware Update Granularity: No Information Provided 00:09:04.709 Per-Namespace SMART Log: Yes 00:09:04.709 Asymmetric Namespace Access Log Page: Not Supported 00:09:04.709 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:04.709 Command Effects Log Page: Supported 00:09:04.709 Get Log Page Extended Data: Supported 00:09:04.709 Telemetry Log Pages: Not Supported 00:09:04.709 Persistent Event Log Pages: Not Supported 00:09:04.709 Supported Log Pages Log Page: May Support 00:09:04.709 Commands Supported & Effects Log Page: Not Supported 00:09:04.709 Feature Identifiers & Effects Log Page: May Support 00:09:04.709 NVMe-MI Commands & Effects Log Page: May 
Support 00:09:04.709 Data Area 4 for Telemetry Log: Not Supported 00:09:04.709 Error Log Page Entries Supported: 1 00:09:04.709 Keep Alive: Not Supported 00:09:04.709 00:09:04.709 NVM Command Set Attributes 00:09:04.709 ========================== 00:09:04.709 Submission Queue Entry Size 00:09:04.709 Max: 64 00:09:04.709 Min: 64 00:09:04.709 Completion Queue Entry Size 00:09:04.709 Max: 16 00:09:04.709 Min: 16 00:09:04.709 Number of Namespaces: 256 00:09:04.709 Compare Command: Supported 00:09:04.709 Write Uncorrectable Command: Not Supported 00:09:04.709 Dataset Management Command: Supported 00:09:04.709 Write Zeroes Command: Supported 00:09:04.709 Set Features Save Field: Supported 00:09:04.709 Reservations: Not Supported 00:09:04.709 Timestamp: Supported 00:09:04.709 Copy: Supported 00:09:04.709 Volatile Write Cache: Present 00:09:04.709 Atomic Write Unit (Normal): 1 00:09:04.709 Atomic Write Unit (PFail): 1 00:09:04.709 Atomic Compare & Write Unit: 1 00:09:04.709 Fused Compare & Write: Not Supported 00:09:04.709 Scatter-Gather List 00:09:04.709 SGL Command Set: Supported 00:09:04.709 SGL Keyed: Not Supported 00:09:04.709 SGL Bit Bucket Descriptor: Not Supported 00:09:04.709 SGL Metadata Pointer: Not Supported 00:09:04.709 Oversized SGL: Not Supported 00:09:04.709 SGL Metadata Address: Not Supported 00:09:04.709 SGL Offset: Not Supported 00:09:04.709 Transport SGL Data Block: Not Supported 00:09:04.709 Replay Protected Memory Block: Not Supported 00:09:04.709 00:09:04.709 Firmware Slot Information 00:09:04.709 ========================= 00:09:04.709 Active slot: 1 00:09:04.709 Slot 1 Firmware Revision: 1.0 00:09:04.709 00:09:04.709 00:09:04.709 Commands Supported and Effects 00:09:04.709 ============================== 00:09:04.709 Admin Commands 00:09:04.709 -------------- 00:09:04.709 Delete I/O Submission Queue (00h): Supported 00:09:04.709 Create I/O Submission Queue (01h): Supported 00:09:04.709 Get Log Page (02h): Supported 00:09:04.709 Delete I/O Completion Queue (04h): Supported 00:09:04.709 Create I/O Completion Queue (05h): Supported 00:09:04.709 Identify (06h): Supported 00:09:04.709 Abort (08h): Supported 00:09:04.709 Set Features (09h): Supported 00:09:04.709 Get Features (0Ah): Supported 00:09:04.709 Asynchronous Event Request (0Ch): Supported 00:09:04.709 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:04.709 Directive Send (19h): Supported 00:09:04.709 Directive Receive (1Ah): Supported 00:09:04.709 Virtualization Management (1Ch): Supported 00:09:04.709 Doorbell Buffer Config (7Ch): Supported 00:09:04.709 Format NVM (80h): Supported LBA-Change 00:09:04.709 I/O Commands 00:09:04.709 ------------ 00:09:04.709 Flush (00h): Supported LBA-Change 00:09:04.709 Write (01h): Supported LBA-Change 00:09:04.709 Read (02h): Supported 00:09:04.709 Compare (05h): Supported 00:09:04.709 Write Zeroes (08h): Supported LBA-Change 00:09:04.709 Dataset Management (09h): Supported LBA-Change 00:09:04.709 Unknown (0Ch): Supported 00:09:04.709 Unknown (12h): Supported 00:09:04.709 Copy (19h): Supported LBA-Change 00:09:04.709 Unknown (1Dh): Supported LBA-Change 00:09:04.709 00:09:04.709 Error Log 00:09:04.709 ========= 00:09:04.709 00:09:04.709 Arbitration 00:09:04.709 =========== 00:09:04.709 Arbitration Burst: no limit 00:09:04.709 00:09:04.709 Power Management 00:09:04.709 ================ 00:09:04.709 Number of Power States: 1 00:09:04.709 Current Power State: Power State #0 00:09:04.709 Power State #0: 00:09:04.709 Max Power: 25.00 W 00:09:04.709 Non-Operational State: 
Operational 00:09:04.709 Entry Latency: 16 microseconds 00:09:04.709 Exit Latency: 4 microseconds 00:09:04.709 Relative Read Throughput: 0 00:09:04.709 Relative Read Latency: 0 00:09:04.709 Relative Write Throughput: 0 00:09:04.709 Relative Write Latency: 0 00:09:04.709 Idle Power: Not Reported 00:09:04.709 Active Power: Not Reported 00:09:04.709 Non-Operational Permissive Mode: Not Supported 00:09:04.709 00:09:04.709 Health Information 00:09:04.709 ================== 00:09:04.709 Critical Warnings: 00:09:04.709 Available Spare Space: OK 00:09:04.709 Temperature: OK 00:09:04.709 Device Reliability: OK 00:09:04.709 Read Only: No 00:09:04.709 Volatile Memory Backup: OK 00:09:04.709 Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.709 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:04.709 Available Spare: 0% 00:09:04.709 Available Spare Threshold: 0% 00:09:04.709 Life Percentage Used: 0% 00:09:04.709 Data Units Read: 2612 00:09:04.709 Data Units Written: 2292 00:09:04.709 Host Read Commands: 117231 00:09:04.709 Host Write Commands: 113001 00:09:04.709 Controller Busy Time: 0 minutes 00:09:04.709 Power Cycles: 0 00:09:04.709 Power On Hours: 0 hours 00:09:04.709 Unsafe Shutdowns: 0 00:09:04.709 Unrecoverable Media Errors: 0 00:09:04.709 Lifetime Error Log Entries: 0 00:09:04.709 Warning Temperature Time: 0 minutes 00:09:04.709 Critical Temperature Time: 0 minutes 00:09:04.709 00:09:04.709 Number of Queues 00:09:04.709 ================ 00:09:04.709 Number of I/O Submission Queues: 64 00:09:04.709 Number of I/O Completion Queues: 64 00:09:04.709 00:09:04.709 ZNS Specific Controller Data 00:09:04.709 ============================ 00:09:04.709 Zone Append Size Limit: 0 00:09:04.709 00:09:04.709 00:09:04.709 Active Namespaces 00:09:04.709 ================= 00:09:04.709 Namespace ID:1 00:09:04.709 Error Recovery Timeout: Unlimited 00:09:04.709 Command Set Identifier: NVM (00h) 00:09:04.709 Deallocate: Supported 00:09:04.709 Deallocated/Unwritten Error: Supported 00:09:04.709 Deallocated Read Value: All 0x00 00:09:04.709 Deallocate in Write Zeroes: Not Supported 00:09:04.709 Deallocated Guard Field: 0xFFFF 00:09:04.709 Flush: Supported 00:09:04.709 Reservation: Not Supported 00:09:04.709 Namespace Sharing Capabilities: Private 00:09:04.709 Size (in LBAs): 1048576 (4GiB) 00:09:04.709 Capacity (in LBAs): 1048576 (4GiB) 00:09:04.709 Utilization (in LBAs): 1048576 (4GiB) 00:09:04.709 Thin Provisioning: Not Supported 00:09:04.709 Per-NS Atomic Units: No 00:09:04.709 Maximum Single Source Range Length: 128 00:09:04.709 Maximum Copy Length: 128 00:09:04.709 Maximum Source Range Count: 128 00:09:04.709 NGUID/EUI64 Never Reused: No 00:09:04.709 Namespace Write Protected: No 00:09:04.709 Number of LBA Formats: 8 00:09:04.709 Current LBA Format: LBA Format #04 00:09:04.709 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:04.709 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:04.709 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:04.709 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:04.709 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:04.709 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:04.709 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:04.709 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:04.709 00:09:04.709 Namespace ID:2 00:09:04.709 Error Recovery Timeout: Unlimited 00:09:04.709 Command Set Identifier: NVM (00h) 00:09:04.710 Deallocate: Supported 00:09:04.710 Deallocated/Unwritten Error: Supported 00:09:04.710 Deallocated Read Value: 
All 0x00 00:09:04.710 Deallocate in Write Zeroes: Not Supported 00:09:04.710 Deallocated Guard Field: 0xFFFF 00:09:04.710 Flush: Supported 00:09:04.710 Reservation: Not Supported 00:09:04.710 Namespace Sharing Capabilities: Private 00:09:04.710 Size (in LBAs): 1048576 (4GiB) 00:09:04.710 Capacity (in LBAs): 1048576 (4GiB) 00:09:04.710 Utilization (in LBAs): 1048576 (4GiB) 00:09:04.710 Thin Provisioning: Not Supported 00:09:04.710 Per-NS Atomic Units: No 00:09:04.710 Maximum Single Source Range Length: 128 00:09:04.710 Maximum Copy Length: 128 00:09:04.710 Maximum Source Range Count: 128 00:09:04.710 NGUID/EUI64 Never Reused: No 00:09:04.710 Namespace Write Protected: No 00:09:04.710 Number of LBA Formats: 8 00:09:04.710 Current LBA Format: LBA Format #04 00:09:04.710 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:04.710 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:04.710 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:04.710 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:04.710 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:04.969 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:04.969 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:04.970 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:04.970 00:09:04.970 Namespace ID:3 00:09:04.970 Error Recovery Timeout: Unlimited 00:09:04.970 Command Set Identifier: NVM (00h) 00:09:04.970 Deallocate: Supported 00:09:04.970 Deallocated/Unwritten Error: Supported 00:09:04.970 Deallocated Read Value: All 0x00 00:09:04.970 Deallocate in Write Zeroes: Not Supported 00:09:04.970 Deallocated Guard Field: 0xFFFF 00:09:04.970 Flush: Supported 00:09:04.970 Reservation: Not Supported 00:09:04.970 Namespace Sharing Capabilities: Private 00:09:04.970 Size (in LBAs): 1048576 (4GiB) 00:09:04.970 Capacity (in LBAs): 1048576 (4GiB) 00:09:04.970 Utilization (in LBAs): 1048576 (4GiB) 00:09:04.970 Thin Provisioning: Not Supported 00:09:04.970 Per-NS Atomic Units: No 00:09:04.970 Maximum Single Source Range Length: 128 00:09:04.970 Maximum Copy Length: 128 00:09:04.970 Maximum Source Range Count: 128 00:09:04.970 NGUID/EUI64 Never Reused: No 00:09:04.970 Namespace Write Protected: No 00:09:04.970 Number of LBA Formats: 8 00:09:04.970 Current LBA Format: LBA Format #04 00:09:04.970 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:04.970 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:04.970 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:04.970 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:04.970 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:04.970 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:04.970 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:04.970 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:04.970 00:09:04.970 00:13:19 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:04.970 00:13:19 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:04.970 ===================================================== 00:09:04.970 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:04.970 ===================================================== 00:09:04.970 Controller Capabilities/Features 00:09:04.970 ================================ 00:09:04.970 Vendor ID: 1b36 00:09:04.970 Subsystem Vendor ID: 1af4 00:09:04.970 Serial Number: 12343 00:09:04.970 Model Number: QEMU NVMe Ctrl 00:09:04.970 Firmware Version: 8.0.0 00:09:04.970 Recommended Arb Burst: 6 
00:09:04.970 IEEE OUI Identifier: 00 54 52 00:09:04.970 Multi-path I/O 00:09:04.970 May have multiple subsystem ports: No 00:09:04.970 May have multiple controllers: Yes 00:09:04.970 Associated with SR-IOV VF: No 00:09:04.970 Max Data Transfer Size: 524288 00:09:04.970 Max Number of Namespaces: 256 00:09:04.970 Max Number of I/O Queues: 64 00:09:04.970 NVMe Specification Version (VS): 1.4 00:09:04.970 NVMe Specification Version (Identify): 1.4 00:09:04.970 Maximum Queue Entries: 2048 00:09:04.970 Contiguous Queues Required: Yes 00:09:04.970 Arbitration Mechanisms Supported 00:09:04.970 Weighted Round Robin: Not Supported 00:09:04.970 Vendor Specific: Not Supported 00:09:04.970 Reset Timeout: 7500 ms 00:09:04.970 Doorbell Stride: 4 bytes 00:09:04.970 NVM Subsystem Reset: Not Supported 00:09:04.970 Command Sets Supported 00:09:04.970 NVM Command Set: Supported 00:09:04.970 Boot Partition: Not Supported 00:09:04.970 Memory Page Size Minimum: 4096 bytes 00:09:04.970 Memory Page Size Maximum: 65536 bytes 00:09:04.970 Persistent Memory Region: Not Supported 00:09:04.970 Optional Asynchronous Events Supported 00:09:04.970 Namespace Attribute Notices: Supported 00:09:04.970 Firmware Activation Notices: Not Supported 00:09:04.970 ANA Change Notices: Not Supported 00:09:04.970 PLE Aggregate Log Change Notices: Not Supported 00:09:04.970 LBA Status Info Alert Notices: Not Supported 00:09:04.970 EGE Aggregate Log Change Notices: Not Supported 00:09:04.970 Normal NVM Subsystem Shutdown event: Not Supported 00:09:04.970 Zone Descriptor Change Notices: Not Supported 00:09:04.970 Discovery Log Change Notices: Not Supported 00:09:04.970 Controller Attributes 00:09:04.970 128-bit Host Identifier: Not Supported 00:09:04.970 Non-Operational Permissive Mode: Not Supported 00:09:04.970 NVM Sets: Not Supported 00:09:04.970 Read Recovery Levels: Not Supported 00:09:04.970 Endurance Groups: Supported 00:09:04.970 Predictable Latency Mode: Not Supported 00:09:04.970 Traffic Based Keep Alive: Not Supported 00:09:04.970 Namespace Granularity: Not Supported 00:09:04.970 SQ Associations: Not Supported 00:09:04.970 UUID List: Not Supported 00:09:04.970 Multi-Domain Subsystem: Not Supported 00:09:04.970 Fixed Capacity Management: Not Supported 00:09:04.970 Variable Capacity Management: Not Supported 00:09:04.970 Delete Endurance Group: Not Supported 00:09:04.970 Delete NVM Set: Not Supported 00:09:04.970 Extended LBA Formats Supported: Supported 00:09:04.970 Flexible Data Placement Supported: Supported 00:09:04.970 00:09:04.970 Controller Memory Buffer Support 00:09:04.970 ================================ 00:09:04.970 Supported: No 00:09:04.970 00:09:04.970 Persistent Memory Region Support 00:09:04.970 ================================ 00:09:04.970 Supported: No 00:09:04.970 00:09:04.970 Admin Command Set Attributes 00:09:04.970 ============================ 00:09:04.970 Security Send/Receive: Not Supported 00:09:04.970 Format NVM: Supported 00:09:04.970 Firmware Activate/Download: Not Supported 00:09:04.970 Namespace Management: Supported 00:09:04.970 Device Self-Test: Not Supported 00:09:04.970 Directives: Supported 00:09:04.970 NVMe-MI: Not Supported 00:09:04.970 Virtualization Management: Not Supported 00:09:04.970 Doorbell Buffer Config: Supported 00:09:04.970 Get LBA Status Capability: Not Supported 00:09:04.970 Command & Feature Lockdown Capability: Not Supported 00:09:04.970 Abort Command Limit: 4 00:09:04.970 Async Event Request Limit: 4 00:09:04.970 Number of Firmware Slots: N/A 00:09:04.970 Firmware Slot 1 
Read-Only: N/A 00:09:04.970 Firmware Activation Without Reset: N/A 00:09:04.970 Multiple Update Detection Support: N/A 00:09:04.970 Firmware Update Granularity: No Information Provided 00:09:04.970 Per-Namespace SMART Log: Yes 00:09:04.970 Asymmetric Namespace Access Log Page: Not Supported 00:09:04.970 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:04.970 Command Effects Log Page: Supported 00:09:04.970 Get Log Page Extended Data: Supported 00:09:04.970 Telemetry Log Pages: Not Supported 00:09:04.970 Persistent Event Log Pages: Not Supported 00:09:04.970 Supported Log Pages Log Page: May Support 00:09:04.970 Commands Supported & Effects Log Page: Not Supported 00:09:04.970 Feature Identifiers & Effects Log Page: May Support 00:09:04.970 NVMe-MI Commands & Effects Log Page: May Support 00:09:04.970 Data Area 4 for Telemetry Log: Not Supported 00:09:04.970 Error Log Page Entries Supported: 1 00:09:04.970 Keep Alive: Not Supported 00:09:04.970 00:09:04.970 NVM Command Set Attributes 00:09:04.970 ========================== 00:09:04.970 Submission Queue Entry Size 00:09:04.970 Max: 64 00:09:04.970 Min: 64 00:09:04.970 Completion Queue Entry Size 00:09:04.970 Max: 16 00:09:04.970 Min: 16 00:09:04.970 Number of Namespaces: 256 00:09:04.970 Compare Command: Supported 00:09:04.970 Write Uncorrectable Command: Not Supported 00:09:04.970 Dataset Management Command: Supported 00:09:04.970 Write Zeroes Command: Supported 00:09:04.970 Set Features Save Field: Supported 00:09:04.970 Reservations: Not Supported 00:09:04.970 Timestamp: Supported 00:09:04.970 Copy: Supported 00:09:04.970 Volatile Write Cache: Present 00:09:04.970 Atomic Write Unit (Normal): 1 00:09:04.970 Atomic Write Unit (PFail): 1 00:09:04.970 Atomic Compare & Write Unit: 1 00:09:04.970 Fused Compare & Write: Not Supported 00:09:04.970 Scatter-Gather List 00:09:04.970 SGL Command Set: Supported 00:09:04.970 SGL Keyed: Not Supported 00:09:04.970 SGL Bit Bucket Descriptor: Not Supported 00:09:04.970 SGL Metadata Pointer: Not Supported 00:09:04.970 Oversized SGL: Not Supported 00:09:04.970 SGL Metadata Address: Not Supported 00:09:04.970 SGL Offset: Not Supported 00:09:04.970 Transport SGL Data Block: Not Supported 00:09:04.970 Replay Protected Memory Block: Not Supported 00:09:04.970 00:09:04.970 Firmware Slot Information 00:09:04.970 ========================= 00:09:04.970 Active slot: 1 00:09:04.970 Slot 1 Firmware Revision: 1.0 00:09:04.970 00:09:04.970 00:09:04.970 Commands Supported and Effects 00:09:04.970 ============================== 00:09:04.970 Admin Commands 00:09:04.970 -------------- 00:09:04.970 Delete I/O Submission Queue (00h): Supported 00:09:04.971 Create I/O Submission Queue (01h): Supported 00:09:04.971 Get Log Page (02h): Supported 00:09:04.971 Delete I/O Completion Queue (04h): Supported 00:09:04.971 Create I/O Completion Queue (05h): Supported 00:09:04.971 Identify (06h): Supported 00:09:04.971 Abort (08h): Supported 00:09:04.971 Set Features (09h): Supported 00:09:04.971 Get Features (0Ah): Supported 00:09:04.971 Asynchronous Event Request (0Ch): Supported 00:09:04.971 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:04.971 Directive Send (19h): Supported 00:09:04.971 Directive Receive (1Ah): Supported 00:09:04.971 Virtualization Management (1Ch): Supported 00:09:04.971 Doorbell Buffer Config (7Ch): Supported 00:09:04.971 Format NVM (80h): Supported LBA-Change 00:09:04.971 I/O Commands 00:09:04.971 ------------ 00:09:04.971 Flush (00h): Supported LBA-Change 00:09:04.971 Write (01h): Supported 
LBA-Change 00:09:04.971 Read (02h): Supported 00:09:04.971 Compare (05h): Supported 00:09:04.971 Write Zeroes (08h): Supported LBA-Change 00:09:04.971 Dataset Management (09h): Supported LBA-Change 00:09:04.971 Unknown (0Ch): Supported 00:09:04.971 Unknown (12h): Supported 00:09:04.971 Copy (19h): Supported LBA-Change 00:09:04.971 Unknown (1Dh): Supported LBA-Change 00:09:04.971 00:09:04.971 Error Log 00:09:04.971 ========= 00:09:04.971 00:09:04.971 Arbitration 00:09:04.971 =========== 00:09:04.971 Arbitration Burst: no limit 00:09:04.971 00:09:04.971 Power Management 00:09:04.971 ================ 00:09:04.971 Number of Power States: 1 00:09:04.971 Current Power State: Power State #0 00:09:04.971 Power State #0: 00:09:04.971 Max Power: 25.00 W 00:09:04.971 Non-Operational State: Operational 00:09:04.971 Entry Latency: 16 microseconds 00:09:04.971 Exit Latency: 4 microseconds 00:09:04.971 Relative Read Throughput: 0 00:09:04.971 Relative Read Latency: 0 00:09:04.971 Relative Write Throughput: 0 00:09:04.971 Relative Write Latency: 0 00:09:04.971 Idle Power: Not Reported 00:09:04.971 Active Power: Not Reported 00:09:04.971 Non-Operational Permissive Mode: Not Supported 00:09:04.971 00:09:04.971 Health Information 00:09:04.971 ================== 00:09:04.971 Critical Warnings: 00:09:04.971 Available Spare Space: OK 00:09:04.971 Temperature: OK 00:09:04.971 Device Reliability: OK 00:09:04.971 Read Only: No 00:09:04.971 Volatile Memory Backup: OK 00:09:04.971 Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.971 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:04.971 Available Spare: 0% 00:09:04.971 Available Spare Threshold: 0% 00:09:04.971 Life Percentage Used: 0% 00:09:04.971 Data Units Read: 931 00:09:04.971 Data Units Written: 825 00:09:04.971 Host Read Commands: 39588 00:09:04.971 Host Write Commands: 38178 00:09:04.971 Controller Busy Time: 0 minutes 00:09:04.971 Power Cycles: 0 00:09:04.971 Power On Hours: 0 hours 00:09:04.971 Unsafe Shutdowns: 0 00:09:04.971 Unrecoverable Media Errors: 0 00:09:04.971 Lifetime Error Log Entries: 0 00:09:04.971 Warning Temperature Time: 0 minutes 00:09:04.971 Critical Temperature Time: 0 minutes 00:09:04.971 00:09:04.971 Number of Queues 00:09:04.971 ================ 00:09:04.971 Number of I/O Submission Queues: 64 00:09:04.971 Number of I/O Completion Queues: 64 00:09:04.971 00:09:04.971 ZNS Specific Controller Data 00:09:04.971 ============================ 00:09:04.971 Zone Append Size Limit: 0 00:09:04.971 00:09:04.971 00:09:04.971 Active Namespaces 00:09:04.971 ================= 00:09:04.971 Namespace ID:1 00:09:04.971 Error Recovery Timeout: Unlimited 00:09:04.971 Command Set Identifier: NVM (00h) 00:09:04.971 Deallocate: Supported 00:09:04.971 Deallocated/Unwritten Error: Supported 00:09:04.971 Deallocated Read Value: All 0x00 00:09:04.971 Deallocate in Write Zeroes: Not Supported 00:09:04.971 Deallocated Guard Field: 0xFFFF 00:09:04.971 Flush: Supported 00:09:04.971 Reservation: Not Supported 00:09:04.971 Namespace Sharing Capabilities: Multiple Controllers 00:09:04.971 Size (in LBAs): 262144 (1GiB) 00:09:04.971 Capacity (in LBAs): 262144 (1GiB) 00:09:04.971 Utilization (in LBAs): 262144 (1GiB) 00:09:04.971 Thin Provisioning: Not Supported 00:09:04.971 Per-NS Atomic Units: No 00:09:04.971 Maximum Single Source Range Length: 128 00:09:04.971 Maximum Copy Length: 128 00:09:04.971 Maximum Source Range Count: 128 00:09:04.971 NGUID/EUI64 Never Reused: No 00:09:04.971 Namespace Write Protected: No 00:09:04.971 Endurance group ID: 1 00:09:04.971 
Number of LBA Formats: 8 00:09:04.971 Current LBA Format: LBA Format #04 00:09:04.971 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:04.971 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:04.971 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:04.971 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:04.971 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:04.971 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:04.971 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:04.971 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:04.971 00:09:04.971 Get Feature FDP: 00:09:04.971 ================ 00:09:04.971 Enabled: Yes 00:09:04.971 FDP configuration index: 0 00:09:04.971 00:09:04.971 FDP configurations log page 00:09:04.971 =========================== 00:09:04.971 Number of FDP configurations: 1 00:09:04.971 Version: 0 00:09:04.971 Size: 112 00:09:04.971 FDP Configuration Descriptor: 0 00:09:04.971 Descriptor Size: 96 00:09:04.971 Reclaim Group Identifier format: 2 00:09:04.971 FDP Volatile Write Cache: Not Present 00:09:04.971 FDP Configuration: Valid 00:09:04.971 Vendor Specific Size: 0 00:09:04.971 Number of Reclaim Groups: 2 00:09:04.971 Number of Reclaim Unit Handles: 8 00:09:04.971 Max Placement Identifiers: 128 00:09:04.971 Number of Namespaces Supported: 256 00:09:04.971 Reclaim Unit Nominal Size: 6000000 bytes 00:09:04.971 Estimated Reclaim Unit Time Limit: Not Reported 00:09:04.971 RUH Desc #000: RUH Type: Initially Isolated 00:09:04.971 RUH Desc #001: RUH Type: Initially Isolated 00:09:04.971 RUH Desc #002: RUH Type: Initially Isolated 00:09:04.971 RUH Desc #003: RUH Type: Initially Isolated 00:09:04.971 RUH Desc #004: RUH Type: Initially Isolated 00:09:04.971 RUH Desc #005: RUH Type: Initially Isolated 00:09:04.971 RUH Desc #006: RUH Type: Initially Isolated 00:09:04.971 RUH Desc #007: RUH Type: Initially Isolated 00:09:04.971 00:09:04.971 FDP reclaim unit handle usage log page 00:09:05.231 ====================================== 00:09:05.231 Number of Reclaim Unit Handles: 8 00:09:05.231 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:05.231 RUH Usage Desc #001: RUH Attributes: Unused 00:09:05.231 RUH Usage Desc #002: RUH Attributes: Unused 00:09:05.231 RUH Usage Desc #003: RUH Attributes: Unused 00:09:05.231 RUH Usage Desc #004: RUH Attributes: Unused 00:09:05.231 RUH Usage Desc #005: RUH Attributes: Unused 00:09:05.231 RUH Usage Desc #006: RUH Attributes: Unused 00:09:05.231 RUH Usage Desc #007: RUH Attributes: Unused 00:09:05.231 00:09:05.231 FDP statistics log page 00:09:05.231 ======================= 00:09:05.231 Host bytes with metadata written: 532127744 00:09:05.231 Media bytes with metadata written: 532185088 00:09:05.231 Media bytes erased: 0 00:09:05.231 00:09:05.231 FDP events log page 00:09:05.231 =================== 00:09:05.231 Number of FDP events: 0 00:09:05.231 00:09:05.231 00:09:05.231 real 0m1.410s 00:09:05.231 user 0m0.479s 00:09:05.231 sys 0m0.719s 00:09:05.231 00:13:19 nvme.nvme_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:05.231 00:13:19 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:05.231 ************************************ 00:09:05.231 END TEST nvme_identify 00:09:05.231 ************************************ 00:09:05.231 00:13:19 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:05.231 00:13:19 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:05.231 00:13:19 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:09:05.231 00:13:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.231 ************************************ 00:09:05.231 START TEST nvme_perf 00:09:05.231 ************************************ 00:09:05.231 00:13:19 nvme.nvme_perf -- common/autotest_common.sh@1121 -- # nvme_perf 00:09:05.231 00:13:19 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:06.610 Initializing NVMe Controllers 00:09:06.610 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:06.610 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:06.610 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:06.610 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:06.610 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:06.610 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:06.610 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:06.610 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:06.610 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:06.610 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:06.610 Initialization complete. Launching workers. 00:09:06.610 ======================================================== 00:09:06.610 Latency(us) 00:09:06.610 Device Information : IOPS MiB/s Average min max 00:09:06.610 PCIE (0000:00:10.0) NSID 1 from core 0: 13926.74 163.20 9193.99 5885.66 38968.23 00:09:06.610 PCIE (0000:00:11.0) NSID 1 from core 0: 13926.74 163.20 9187.41 5657.21 38374.53 00:09:06.610 PCIE (0000:00:13.0) NSID 1 from core 0: 13926.74 163.20 9178.84 4862.75 38399.23 00:09:06.610 PCIE (0000:00:12.0) NSID 1 from core 0: 13926.74 163.20 9169.99 4440.90 37921.67 00:09:06.610 PCIE (0000:00:12.0) NSID 2 from core 0: 13926.74 163.20 9161.28 4067.20 37382.19 00:09:06.610 PCIE (0000:00:12.0) NSID 3 from core 0: 13990.62 163.95 9110.65 3599.13 31919.09 00:09:06.610 ======================================================== 00:09:06.610 Total : 83624.31 979.97 9166.98 3599.13 38968.23 00:09:06.610 00:09:06.610 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:06.610 ================================================================================= 00:09:06.610 1.00000% : 8106.461us 00:09:06.610 10.00000% : 8369.658us 00:09:06.610 25.00000% : 8527.576us 00:09:06.610 50.00000% : 8843.412us 00:09:06.610 75.00000% : 9106.609us 00:09:06.610 90.00000% : 9422.445us 00:09:06.610 95.00000% : 10527.871us 00:09:06.610 98.00000% : 14002.069us 00:09:06.610 99.00000% : 15160.135us 00:09:06.610 99.50000% : 32636.402us 00:09:06.610 99.90000% : 38742.567us 00:09:06.610 99.99000% : 38953.124us 00:09:06.610 99.99900% : 39163.682us 00:09:06.610 99.99990% : 39163.682us 00:09:06.610 99.99999% : 39163.682us 00:09:06.610 00:09:06.610 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:06.610 ================================================================================= 00:09:06.610 1.00000% : 8159.100us 00:09:06.610 10.00000% : 8369.658us 00:09:06.610 25.00000% : 8580.215us 00:09:06.610 50.00000% : 8843.412us 00:09:06.610 75.00000% : 9106.609us 00:09:06.610 90.00000% : 9369.806us 00:09:06.610 95.00000% : 10422.593us 00:09:06.610 98.00000% : 13791.512us 00:09:06.610 99.00000% : 15475.971us 00:09:06.610 99.50000% : 32425.844us 00:09:06.610 99.90000% : 38321.452us 00:09:06.610 99.99000% : 38532.010us 00:09:06.610 99.99900% : 38532.010us 00:09:06.610 99.99990% : 38532.010us 00:09:06.610 99.99999% : 38532.010us 00:09:06.610 
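The MiB/s column in each summary block above follows directly from the IOPS column: the run was launched with -o 12288, so every completed I/O transfers 12288 bytes, and throughput is simply IOPS multiplied by the I/O size. A minimal awk sketch that reproduces the conversion (illustrative only, not part of the SPDK test suite; the numbers are copied from the PCIE (0000:00:10.0) NSID 1 row of the summary table):

awk 'BEGIN {
  iops = 13926.74       # IOPS column for PCIE (0000:00:10.0) NSID 1
  io_size = 12288       # bytes per I/O, from the -o 12288 flag on spdk_nvme_perf
  # 13926.74 IOPS x 12288 B / 2^20 = ~163.20 MiB/s, matching the MiB/s column
  printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)
}'

The same arithmetic applies to every row, including the Total line (83624.31 IOPS across all six namespaces at 12288 bytes per I/O giving 979.97 MiB/s).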
00:09:06.610 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:06.610 ================================================================================= 00:09:06.610 1.00000% : 8106.461us 00:09:06.610 10.00000% : 8369.658us 00:09:06.610 25.00000% : 8580.215us 00:09:06.610 50.00000% : 8843.412us 00:09:06.610 75.00000% : 9053.969us 00:09:06.610 90.00000% : 9369.806us 00:09:06.610 95.00000% : 10264.675us 00:09:06.610 98.00000% : 13686.233us 00:09:06.610 99.00000% : 15686.529us 00:09:06.610 99.50000% : 32846.959us 00:09:06.610 99.90000% : 38321.452us 00:09:06.610 99.99000% : 38532.010us 00:09:06.610 99.99900% : 38532.010us 00:09:06.610 99.99990% : 38532.010us 00:09:06.610 99.99999% : 38532.010us 00:09:06.610 00:09:06.610 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:06.610 ================================================================================= 00:09:06.610 1.00000% : 8053.822us 00:09:06.610 10.00000% : 8422.297us 00:09:06.610 25.00000% : 8580.215us 00:09:06.610 50.00000% : 8843.412us 00:09:06.610 75.00000% : 9053.969us 00:09:06.610 90.00000% : 9369.806us 00:09:06.610 95.00000% : 10264.675us 00:09:06.610 98.00000% : 13475.676us 00:09:06.610 99.00000% : 15897.086us 00:09:06.610 99.50000% : 32425.844us 00:09:06.610 99.90000% : 37900.337us 00:09:06.610 99.99000% : 38110.895us 00:09:06.610 99.99900% : 38110.895us 00:09:06.610 99.99990% : 38110.895us 00:09:06.610 99.99999% : 38110.895us 00:09:06.610 00:09:06.610 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:06.610 ================================================================================= 00:09:06.610 1.00000% : 8053.822us 00:09:06.610 10.00000% : 8369.658us 00:09:06.610 25.00000% : 8580.215us 00:09:06.610 50.00000% : 8843.412us 00:09:06.610 75.00000% : 9106.609us 00:09:06.610 90.00000% : 9369.806us 00:09:06.610 95.00000% : 10106.757us 00:09:06.610 98.00000% : 13686.233us 00:09:06.610 99.00000% : 15791.807us 00:09:06.610 99.50000% : 31794.172us 00:09:06.610 99.90000% : 37268.665us 00:09:06.610 99.99000% : 37479.222us 00:09:06.610 99.99900% : 37479.222us 00:09:06.610 99.99990% : 37479.222us 00:09:06.610 99.99999% : 37479.222us 00:09:06.610 00:09:06.610 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:06.610 ================================================================================= 00:09:06.610 1.00000% : 8053.822us 00:09:06.610 10.00000% : 8369.658us 00:09:06.610 25.00000% : 8580.215us 00:09:06.610 50.00000% : 8843.412us 00:09:06.610 75.00000% : 9106.609us 00:09:06.610 90.00000% : 9369.806us 00:09:06.610 95.00000% : 10317.314us 00:09:06.610 98.00000% : 13896.790us 00:09:06.610 99.00000% : 15265.414us 00:09:06.610 99.50000% : 25688.006us 00:09:06.610 99.90000% : 31794.172us 00:09:06.610 99.99000% : 32004.729us 00:09:06.610 99.99900% : 32004.729us 00:09:06.610 99.99990% : 32004.729us 00:09:06.610 99.99999% : 32004.729us 00:09:06.610 00:09:06.610 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:06.610 ============================================================================== 00:09:06.610 Range in us Cumulative IO count 00:09:06.610 5869.288 - 5895.608: 0.0215% ( 3) 00:09:06.610 5895.608 - 5921.928: 0.0358% ( 2) 00:09:06.610 5921.928 - 5948.247: 0.0430% ( 1) 00:09:06.610 5948.247 - 5974.567: 0.0645% ( 3) 00:09:06.610 5974.567 - 6000.887: 0.0788% ( 2) 00:09:06.610 6000.887 - 6027.206: 0.0932% ( 2) 00:09:06.610 6027.206 - 6053.526: 0.1075% ( 2) 00:09:06.610 6053.526 - 6079.846: 0.1218% ( 2) 00:09:06.610 6079.846 - 
6106.165: 0.1362% ( 2) 00:09:06.611 6106.165 - 6132.485: 0.1505% ( 2) 00:09:06.611 6132.485 - 6158.805: 0.1720% ( 3) 00:09:06.611 6158.805 - 6185.124: 0.1792% ( 1) 00:09:06.611 6185.124 - 6211.444: 0.1935% ( 2) 00:09:06.611 6211.444 - 6237.764: 0.2079% ( 2) 00:09:06.611 6237.764 - 6264.084: 0.2222% ( 2) 00:09:06.611 6264.084 - 6290.403: 0.2365% ( 2) 00:09:06.611 6290.403 - 6316.723: 0.2509% ( 2) 00:09:06.611 6316.723 - 6343.043: 0.2652% ( 2) 00:09:06.611 6343.043 - 6369.362: 0.2867% ( 3) 00:09:06.611 6369.362 - 6395.682: 0.2939% ( 1) 00:09:06.611 6395.682 - 6422.002: 0.3010% ( 1) 00:09:06.611 6422.002 - 6448.321: 0.3225% ( 3) 00:09:06.611 6448.321 - 6474.641: 0.3369% ( 2) 00:09:06.611 6474.641 - 6500.961: 0.3512% ( 2) 00:09:06.611 6500.961 - 6527.280: 0.3655% ( 2) 00:09:06.611 6527.280 - 6553.600: 0.3727% ( 1) 00:09:06.611 6553.600 - 6579.920: 0.4014% ( 4) 00:09:06.611 6606.239 - 6632.559: 0.4229% ( 3) 00:09:06.611 6632.559 - 6658.879: 0.4300% ( 1) 00:09:06.611 6658.879 - 6685.198: 0.4515% ( 3) 00:09:06.611 6685.198 - 6711.518: 0.4587% ( 1) 00:09:06.611 7895.904 - 7948.543: 0.5232% ( 9) 00:09:06.611 7948.543 - 8001.182: 0.6666% ( 20) 00:09:06.611 8001.182 - 8053.822: 0.9246% ( 36) 00:09:06.611 8053.822 - 8106.461: 1.5768% ( 91) 00:09:06.611 8106.461 - 8159.100: 2.6950% ( 156) 00:09:06.611 8159.100 - 8211.740: 4.5011% ( 252) 00:09:06.611 8211.740 - 8264.379: 6.9166% ( 337) 00:09:06.611 8264.379 - 8317.018: 9.9269% ( 420) 00:09:06.611 8317.018 - 8369.658: 13.2956% ( 470) 00:09:06.611 8369.658 - 8422.297: 17.0943% ( 530) 00:09:06.611 8422.297 - 8474.937: 21.0866% ( 557) 00:09:06.611 8474.937 - 8527.576: 25.2437% ( 580) 00:09:06.611 8527.576 - 8580.215: 29.6087% ( 609) 00:09:06.611 8580.215 - 8632.855: 34.0668% ( 622) 00:09:06.611 8632.855 - 8685.494: 38.7185% ( 649) 00:09:06.611 8685.494 - 8738.133: 43.3558% ( 647) 00:09:06.611 8738.133 - 8790.773: 48.0146% ( 650) 00:09:06.611 8790.773 - 8843.412: 52.8168% ( 670) 00:09:06.611 8843.412 - 8896.051: 57.6763% ( 678) 00:09:06.611 8896.051 - 8948.691: 62.5287% ( 677) 00:09:06.611 8948.691 - 9001.330: 67.2377% ( 657) 00:09:06.611 9001.330 - 9053.969: 71.8463% ( 643) 00:09:06.611 9053.969 - 9106.609: 75.9819% ( 577) 00:09:06.611 9106.609 - 9159.248: 79.8237% ( 536) 00:09:06.611 9159.248 - 9211.888: 82.8268% ( 419) 00:09:06.611 9211.888 - 9264.527: 85.4071% ( 360) 00:09:06.611 9264.527 - 9317.166: 87.5573% ( 300) 00:09:06.611 9317.166 - 9369.806: 89.0840% ( 213) 00:09:06.611 9369.806 - 9422.445: 90.3526% ( 177) 00:09:06.611 9422.445 - 9475.084: 91.4062% ( 147) 00:09:06.611 9475.084 - 9527.724: 92.2950% ( 124) 00:09:06.611 9527.724 - 9580.363: 92.8684% ( 80) 00:09:06.611 9580.363 - 9633.002: 93.3056% ( 61) 00:09:06.611 9633.002 - 9685.642: 93.6855% ( 53) 00:09:06.611 9685.642 - 9738.281: 93.9364% ( 35) 00:09:06.611 9738.281 - 9790.920: 94.1370% ( 28) 00:09:06.611 9790.920 - 9843.560: 94.2947% ( 22) 00:09:06.611 9843.560 - 9896.199: 94.3736% ( 11) 00:09:06.611 9896.199 - 9948.839: 94.4667% ( 13) 00:09:06.611 9948.839 - 10001.478: 94.5312% ( 9) 00:09:06.611 10001.478 - 10054.117: 94.5743% ( 6) 00:09:06.611 10054.117 - 10106.757: 94.6388% ( 9) 00:09:06.611 10106.757 - 10159.396: 94.6889% ( 7) 00:09:06.611 10159.396 - 10212.035: 94.7463% ( 8) 00:09:06.611 10212.035 - 10264.675: 94.7821% ( 5) 00:09:06.611 10264.675 - 10317.314: 94.8323% ( 7) 00:09:06.611 10317.314 - 10369.953: 94.8753% ( 6) 00:09:06.611 10369.953 - 10422.593: 94.9183% ( 6) 00:09:06.611 10422.593 - 10475.232: 94.9756% ( 8) 00:09:06.611 10475.232 - 10527.871: 95.0258% ( 7) 00:09:06.611 
10527.871 - 10580.511: 95.0760% ( 7) 00:09:06.611 10580.511 - 10633.150: 95.1190% ( 6) 00:09:06.611 10633.150 - 10685.790: 95.1763% ( 8) 00:09:06.611 10685.790 - 10738.429: 95.2265% ( 7) 00:09:06.611 10738.429 - 10791.068: 95.2910% ( 9) 00:09:06.611 10791.068 - 10843.708: 95.3483% ( 8) 00:09:06.611 10843.708 - 10896.347: 95.4057% ( 8) 00:09:06.611 10896.347 - 10948.986: 95.4702% ( 9) 00:09:06.611 10948.986 - 11001.626: 95.5060% ( 5) 00:09:06.611 11001.626 - 11054.265: 95.5562% ( 7) 00:09:06.611 11054.265 - 11106.904: 95.5992% ( 6) 00:09:06.611 11106.904 - 11159.544: 95.6494% ( 7) 00:09:06.611 11159.544 - 11212.183: 95.7067% ( 8) 00:09:06.611 11212.183 - 11264.822: 95.7425% ( 5) 00:09:06.611 11264.822 - 11317.462: 95.7999% ( 8) 00:09:06.611 11317.462 - 11370.101: 95.8286% ( 4) 00:09:06.611 11370.101 - 11422.741: 95.8501% ( 3) 00:09:06.611 11422.741 - 11475.380: 95.8644% ( 2) 00:09:06.611 11475.380 - 11528.019: 95.9002% ( 5) 00:09:06.611 11528.019 - 11580.659: 95.9146% ( 2) 00:09:06.611 11580.659 - 11633.298: 95.9504% ( 5) 00:09:06.611 11633.298 - 11685.937: 95.9719% ( 3) 00:09:06.611 11685.937 - 11738.577: 95.9862% ( 2) 00:09:06.611 11738.577 - 11791.216: 96.0221% ( 5) 00:09:06.611 11791.216 - 11843.855: 96.0364% ( 2) 00:09:06.611 11843.855 - 11896.495: 96.0722% ( 5) 00:09:06.611 11896.495 - 11949.134: 96.0866% ( 2) 00:09:06.611 11949.134 - 12001.773: 96.1153% ( 4) 00:09:06.611 12001.773 - 12054.413: 96.1439% ( 4) 00:09:06.611 12054.413 - 12107.052: 96.1726% ( 4) 00:09:06.611 12107.052 - 12159.692: 96.2013% ( 4) 00:09:06.611 12159.692 - 12212.331: 96.2299% ( 4) 00:09:06.611 12212.331 - 12264.970: 96.2729% ( 6) 00:09:06.611 12264.970 - 12317.610: 96.3446% ( 10) 00:09:06.611 12317.610 - 12370.249: 96.3948% ( 7) 00:09:06.611 12370.249 - 12422.888: 96.4450% ( 7) 00:09:06.611 12422.888 - 12475.528: 96.5095% ( 9) 00:09:06.611 12475.528 - 12528.167: 96.5596% ( 7) 00:09:06.611 12528.167 - 12580.806: 96.6170% ( 8) 00:09:06.611 12580.806 - 12633.446: 96.6743% ( 8) 00:09:06.611 12633.446 - 12686.085: 96.7388% ( 9) 00:09:06.611 12686.085 - 12738.724: 96.7818% ( 6) 00:09:06.611 12738.724 - 12791.364: 96.8392% ( 8) 00:09:06.611 12791.364 - 12844.003: 96.9037% ( 9) 00:09:06.611 12844.003 - 12896.643: 96.9610% ( 8) 00:09:06.611 12896.643 - 12949.282: 97.0112% ( 7) 00:09:06.611 12949.282 - 13001.921: 97.0542% ( 6) 00:09:06.611 13001.921 - 13054.561: 97.1044% ( 7) 00:09:06.611 13054.561 - 13107.200: 97.1545% ( 7) 00:09:06.611 13107.200 - 13159.839: 97.2262% ( 10) 00:09:06.611 13159.839 - 13212.479: 97.2692% ( 6) 00:09:06.611 13212.479 - 13265.118: 97.3050% ( 5) 00:09:06.611 13265.118 - 13317.757: 97.3265% ( 3) 00:09:06.611 13317.757 - 13370.397: 97.3696% ( 6) 00:09:06.611 13370.397 - 13423.036: 97.4054% ( 5) 00:09:06.611 13423.036 - 13475.676: 97.4627% ( 8) 00:09:06.611 13475.676 - 13580.954: 97.5702% ( 15) 00:09:06.611 13580.954 - 13686.233: 97.7064% ( 19) 00:09:06.611 13686.233 - 13791.512: 97.8426% ( 19) 00:09:06.611 13791.512 - 13896.790: 97.9931% ( 21) 00:09:06.611 13896.790 - 14002.069: 98.1436% ( 21) 00:09:06.611 14002.069 - 14107.348: 98.2583% ( 16) 00:09:06.611 14107.348 - 14212.627: 98.4017% ( 20) 00:09:06.611 14212.627 - 14317.905: 98.5522% ( 21) 00:09:06.611 14317.905 - 14423.184: 98.6525% ( 14) 00:09:06.611 14423.184 - 14528.463: 98.7529% ( 14) 00:09:06.611 14528.463 - 14633.741: 98.8174% ( 9) 00:09:06.611 14633.741 - 14739.020: 98.8532% ( 5) 00:09:06.611 14739.020 - 14844.299: 98.9106% ( 8) 00:09:06.611 14844.299 - 14949.578: 98.9536% ( 6) 00:09:06.611 14949.578 - 15054.856: 98.9894% ( 5) 
00:09:06.611 15054.856 - 15160.135: 99.0539% ( 9) 00:09:06.611 15160.135 - 15265.414: 99.0754% ( 3) 00:09:06.611 15265.414 - 15370.692: 99.0826% ( 1) 00:09:06.611 31373.057 - 31583.614: 99.1399% ( 8) 00:09:06.611 31583.614 - 31794.172: 99.2331% ( 13) 00:09:06.611 31794.172 - 32004.729: 99.3119% ( 11) 00:09:06.611 32004.729 - 32215.287: 99.3979% ( 12) 00:09:06.611 32215.287 - 32425.844: 99.4839% ( 12) 00:09:06.611 32425.844 - 32636.402: 99.5413% ( 8) 00:09:06.611 37689.780 - 37900.337: 99.5915% ( 7) 00:09:06.611 37900.337 - 38110.895: 99.6560% ( 9) 00:09:06.611 38110.895 - 38321.452: 99.7420% ( 12) 00:09:06.611 38321.452 - 38532.010: 99.8280% ( 12) 00:09:06.611 38532.010 - 38742.567: 99.9140% ( 12) 00:09:06.611 38742.567 - 38953.124: 99.9928% ( 11) 00:09:06.611 38953.124 - 39163.682: 100.0000% ( 1) 00:09:06.611 00:09:06.611 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:06.611 ============================================================================== 00:09:06.611 Range in us Cumulative IO count 00:09:06.611 5632.411 - 5658.731: 0.0072% ( 1) 00:09:06.611 5658.731 - 5685.051: 0.0215% ( 2) 00:09:06.611 5685.051 - 5711.370: 0.0358% ( 2) 00:09:06.611 5711.370 - 5737.690: 0.0502% ( 2) 00:09:06.611 5737.690 - 5764.010: 0.0717% ( 3) 00:09:06.611 5764.010 - 5790.329: 0.0860% ( 2) 00:09:06.611 5790.329 - 5816.649: 0.1075% ( 3) 00:09:06.611 5816.649 - 5842.969: 0.1290% ( 3) 00:09:06.611 5842.969 - 5869.288: 0.1433% ( 2) 00:09:06.611 5869.288 - 5895.608: 0.1577% ( 2) 00:09:06.611 5895.608 - 5921.928: 0.1792% ( 3) 00:09:06.611 5921.928 - 5948.247: 0.1935% ( 2) 00:09:06.611 5948.247 - 5974.567: 0.2150% ( 3) 00:09:06.611 5974.567 - 6000.887: 0.2294% ( 2) 00:09:06.611 6000.887 - 6027.206: 0.2437% ( 2) 00:09:06.611 6027.206 - 6053.526: 0.2652% ( 3) 00:09:06.611 6053.526 - 6079.846: 0.2795% ( 2) 00:09:06.611 6079.846 - 6106.165: 0.3010% ( 3) 00:09:06.611 6106.165 - 6132.485: 0.3225% ( 3) 00:09:06.612 6132.485 - 6158.805: 0.3369% ( 2) 00:09:06.612 6158.805 - 6185.124: 0.3584% ( 3) 00:09:06.612 6185.124 - 6211.444: 0.3727% ( 2) 00:09:06.612 6211.444 - 6237.764: 0.3942% ( 3) 00:09:06.612 6237.764 - 6264.084: 0.4085% ( 2) 00:09:06.612 6264.084 - 6290.403: 0.4300% ( 3) 00:09:06.612 6290.403 - 6316.723: 0.4444% ( 2) 00:09:06.612 6316.723 - 6343.043: 0.4587% ( 2) 00:09:06.612 7948.543 - 8001.182: 0.4874% ( 4) 00:09:06.612 8001.182 - 8053.822: 0.5877% ( 14) 00:09:06.612 8053.822 - 8106.461: 0.8243% ( 33) 00:09:06.612 8106.461 - 8159.100: 1.2185% ( 55) 00:09:06.612 8159.100 - 8211.740: 2.3796% ( 162) 00:09:06.612 8211.740 - 8264.379: 4.0998% ( 240) 00:09:06.612 8264.379 - 8317.018: 6.9166% ( 393) 00:09:06.612 8317.018 - 8369.658: 10.1132% ( 446) 00:09:06.612 8369.658 - 8422.297: 13.9980% ( 542) 00:09:06.612 8422.297 - 8474.937: 18.2124% ( 588) 00:09:06.612 8474.937 - 8527.576: 22.7136% ( 628) 00:09:06.612 8527.576 - 8580.215: 27.4799% ( 665) 00:09:06.612 8580.215 - 8632.855: 32.5258% ( 704) 00:09:06.612 8632.855 - 8685.494: 37.7509% ( 729) 00:09:06.612 8685.494 - 8738.133: 43.0834% ( 744) 00:09:06.612 8738.133 - 8790.773: 48.5808% ( 767) 00:09:06.612 8790.773 - 8843.412: 54.0926% ( 769) 00:09:06.612 8843.412 - 8896.051: 59.6044% ( 769) 00:09:06.612 8896.051 - 8948.691: 65.0659% ( 762) 00:09:06.612 8948.691 - 9001.330: 70.3412% ( 736) 00:09:06.612 9001.330 - 9053.969: 74.9283% ( 640) 00:09:06.612 9053.969 - 9106.609: 79.0209% ( 571) 00:09:06.612 9106.609 - 9159.248: 82.2176% ( 446) 00:09:06.612 9159.248 - 9211.888: 84.9197% ( 377) 00:09:06.612 9211.888 - 9264.527: 87.0628% ( 299) 00:09:06.612 
9264.527 - 9317.166: 88.7973% ( 242) 00:09:06.612 9317.166 - 9369.806: 90.2093% ( 197) 00:09:06.612 9369.806 - 9422.445: 91.2772% ( 149) 00:09:06.612 9422.445 - 9475.084: 92.1660% ( 124) 00:09:06.612 9475.084 - 9527.724: 92.8182% ( 91) 00:09:06.612 9527.724 - 9580.363: 93.2985% ( 67) 00:09:06.612 9580.363 - 9633.002: 93.6640% ( 51) 00:09:06.612 9633.002 - 9685.642: 93.9435% ( 39) 00:09:06.612 9685.642 - 9738.281: 94.1872% ( 34) 00:09:06.612 9738.281 - 9790.920: 94.3306% ( 20) 00:09:06.612 9790.920 - 9843.560: 94.4452% ( 16) 00:09:06.612 9843.560 - 9896.199: 94.5312% ( 12) 00:09:06.612 9896.199 - 9948.839: 94.5958% ( 9) 00:09:06.612 9948.839 - 10001.478: 94.6459% ( 7) 00:09:06.612 10001.478 - 10054.117: 94.6818% ( 5) 00:09:06.612 10054.117 - 10106.757: 94.7104% ( 4) 00:09:06.612 10106.757 - 10159.396: 94.7463% ( 5) 00:09:06.612 10159.396 - 10212.035: 94.8036% ( 8) 00:09:06.612 10212.035 - 10264.675: 94.8538% ( 7) 00:09:06.612 10264.675 - 10317.314: 94.9255% ( 10) 00:09:06.612 10317.314 - 10369.953: 94.9828% ( 8) 00:09:06.612 10369.953 - 10422.593: 95.0401% ( 8) 00:09:06.612 10422.593 - 10475.232: 95.0975% ( 8) 00:09:06.612 10475.232 - 10527.871: 95.1476% ( 7) 00:09:06.612 10527.871 - 10580.511: 95.1978% ( 7) 00:09:06.612 10580.511 - 10633.150: 95.2552% ( 8) 00:09:06.612 10633.150 - 10685.790: 95.2982% ( 6) 00:09:06.612 10685.790 - 10738.429: 95.3412% ( 6) 00:09:06.612 10738.429 - 10791.068: 95.3770% ( 5) 00:09:06.612 10791.068 - 10843.708: 95.4128% ( 5) 00:09:06.612 10843.708 - 10896.347: 95.4487% ( 5) 00:09:06.612 10896.347 - 10948.986: 95.4845% ( 5) 00:09:06.612 10948.986 - 11001.626: 95.5275% ( 6) 00:09:06.612 11001.626 - 11054.265: 95.5419% ( 2) 00:09:06.612 11054.265 - 11106.904: 95.5705% ( 4) 00:09:06.612 11106.904 - 11159.544: 95.6064% ( 5) 00:09:06.612 11159.544 - 11212.183: 95.6279% ( 3) 00:09:06.612 11212.183 - 11264.822: 95.6565% ( 4) 00:09:06.612 11264.822 - 11317.462: 95.6852% ( 4) 00:09:06.612 11317.462 - 11370.101: 95.7067% ( 3) 00:09:06.612 11370.101 - 11422.741: 95.7425% ( 5) 00:09:06.612 11422.741 - 11475.380: 95.7712% ( 4) 00:09:06.612 11475.380 - 11528.019: 95.7999% ( 4) 00:09:06.612 11528.019 - 11580.659: 95.8357% ( 5) 00:09:06.612 11580.659 - 11633.298: 95.8644% ( 4) 00:09:06.612 11633.298 - 11685.937: 95.8787% ( 2) 00:09:06.612 11685.937 - 11738.577: 95.9074% ( 4) 00:09:06.612 11738.577 - 11791.216: 95.9361% ( 4) 00:09:06.612 11791.216 - 11843.855: 95.9647% ( 4) 00:09:06.612 11843.855 - 11896.495: 95.9862% ( 3) 00:09:06.612 11896.495 - 11949.134: 96.0149% ( 4) 00:09:06.612 11949.134 - 12001.773: 96.0436% ( 4) 00:09:06.612 12001.773 - 12054.413: 96.0722% ( 4) 00:09:06.612 12054.413 - 12107.052: 96.1009% ( 4) 00:09:06.612 12107.052 - 12159.692: 96.1296% ( 4) 00:09:06.612 12159.692 - 12212.331: 96.1511% ( 3) 00:09:06.612 12212.331 - 12264.970: 96.2228% ( 10) 00:09:06.612 12264.970 - 12317.610: 96.2658% ( 6) 00:09:06.612 12317.610 - 12370.249: 96.3159% ( 7) 00:09:06.612 12370.249 - 12422.888: 96.3804% ( 9) 00:09:06.612 12422.888 - 12475.528: 96.4378% ( 8) 00:09:06.612 12475.528 - 12528.167: 96.5095% ( 10) 00:09:06.612 12528.167 - 12580.806: 96.5740% ( 9) 00:09:06.612 12580.806 - 12633.446: 96.6600% ( 12) 00:09:06.612 12633.446 - 12686.085: 96.7603% ( 14) 00:09:06.612 12686.085 - 12738.724: 96.8463% ( 12) 00:09:06.612 12738.724 - 12791.364: 96.9467% ( 14) 00:09:06.612 12791.364 - 12844.003: 97.0183% ( 10) 00:09:06.612 12844.003 - 12896.643: 97.1115% ( 13) 00:09:06.612 12896.643 - 12949.282: 97.1975% ( 12) 00:09:06.612 12949.282 - 13001.921: 97.2907% ( 13) 00:09:06.612 
13001.921 - 13054.561: 97.3696% ( 11) 00:09:06.612 13054.561 - 13107.200: 97.4341% ( 9) 00:09:06.612 13107.200 - 13159.839: 97.4842% ( 7) 00:09:06.612 13159.839 - 13212.479: 97.5416% ( 8) 00:09:06.612 13212.479 - 13265.118: 97.5774% ( 5) 00:09:06.612 13265.118 - 13317.757: 97.5989% ( 3) 00:09:06.612 13317.757 - 13370.397: 97.6347% ( 5) 00:09:06.612 13370.397 - 13423.036: 97.6562% ( 3) 00:09:06.612 13423.036 - 13475.676: 97.6921% ( 5) 00:09:06.612 13475.676 - 13580.954: 97.8068% ( 16) 00:09:06.612 13580.954 - 13686.233: 97.9071% ( 14) 00:09:06.612 13686.233 - 13791.512: 98.0218% ( 16) 00:09:06.612 13791.512 - 13896.790: 98.1293% ( 15) 00:09:06.612 13896.790 - 14002.069: 98.2440% ( 16) 00:09:06.612 14002.069 - 14107.348: 98.3587% ( 16) 00:09:06.612 14107.348 - 14212.627: 98.4518% ( 13) 00:09:06.612 14212.627 - 14317.905: 98.5522% ( 14) 00:09:06.612 14317.905 - 14423.184: 98.6167% ( 9) 00:09:06.612 14423.184 - 14528.463: 98.7242% ( 15) 00:09:06.612 14528.463 - 14633.741: 98.7529% ( 4) 00:09:06.612 14633.741 - 14739.020: 98.7815% ( 4) 00:09:06.612 14739.020 - 14844.299: 98.8174% ( 5) 00:09:06.612 14844.299 - 14949.578: 98.8460% ( 4) 00:09:06.612 14949.578 - 15054.856: 98.8819% ( 5) 00:09:06.612 15054.856 - 15160.135: 98.9106% ( 4) 00:09:06.612 15160.135 - 15265.414: 98.9464% ( 5) 00:09:06.612 15265.414 - 15370.692: 98.9751% ( 4) 00:09:06.612 15370.692 - 15475.971: 99.0037% ( 4) 00:09:06.612 15475.971 - 15581.250: 99.0324% ( 4) 00:09:06.612 15581.250 - 15686.529: 99.0682% ( 5) 00:09:06.612 15686.529 - 15791.807: 99.0826% ( 2) 00:09:06.612 31162.500 - 31373.057: 99.1184% ( 5) 00:09:06.612 31373.057 - 31583.614: 99.2188% ( 14) 00:09:06.612 31583.614 - 31794.172: 99.3119% ( 13) 00:09:06.612 31794.172 - 32004.729: 99.4051% ( 13) 00:09:06.612 32004.729 - 32215.287: 99.4983% ( 13) 00:09:06.612 32215.287 - 32425.844: 99.5413% ( 6) 00:09:06.612 37268.665 - 37479.222: 99.6130% ( 10) 00:09:06.612 37479.222 - 37689.780: 99.6990% ( 12) 00:09:06.612 37689.780 - 37900.337: 99.7778% ( 11) 00:09:06.612 37900.337 - 38110.895: 99.8782% ( 14) 00:09:06.612 38110.895 - 38321.452: 99.9713% ( 13) 00:09:06.612 38321.452 - 38532.010: 100.0000% ( 4) 00:09:06.612 00:09:06.612 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:06.612 ============================================================================== 00:09:06.612 Range in us Cumulative IO count 00:09:06.612 4842.821 - 4869.141: 0.0072% ( 1) 00:09:06.612 4869.141 - 4895.460: 0.0287% ( 3) 00:09:06.612 4895.460 - 4921.780: 0.0502% ( 3) 00:09:06.612 4921.780 - 4948.100: 0.0645% ( 2) 00:09:06.612 4948.100 - 4974.419: 0.0860% ( 3) 00:09:06.612 4974.419 - 5000.739: 0.1003% ( 2) 00:09:06.612 5000.739 - 5027.059: 0.1218% ( 3) 00:09:06.612 5027.059 - 5053.378: 0.1362% ( 2) 00:09:06.612 5053.378 - 5079.698: 0.1505% ( 2) 00:09:06.612 5079.698 - 5106.018: 0.1720% ( 3) 00:09:06.612 5106.018 - 5132.337: 0.1864% ( 2) 00:09:06.612 5132.337 - 5158.657: 0.2007% ( 2) 00:09:06.612 5158.657 - 5184.977: 0.2222% ( 3) 00:09:06.612 5184.977 - 5211.296: 0.2365% ( 2) 00:09:06.612 5211.296 - 5237.616: 0.2580% ( 3) 00:09:06.612 5237.616 - 5263.936: 0.2724% ( 2) 00:09:06.612 5263.936 - 5290.255: 0.2939% ( 3) 00:09:06.612 5290.255 - 5316.575: 0.3082% ( 2) 00:09:06.612 5316.575 - 5342.895: 0.3297% ( 3) 00:09:06.612 5342.895 - 5369.214: 0.3440% ( 2) 00:09:06.612 5369.214 - 5395.534: 0.3655% ( 3) 00:09:06.612 5395.534 - 5421.854: 0.3799% ( 2) 00:09:06.612 5421.854 - 5448.173: 0.4014% ( 3) 00:09:06.612 5448.173 - 5474.493: 0.4229% ( 3) 00:09:06.612 5474.493 - 5500.813: 0.4372% ( 
2) 00:09:06.612 5500.813 - 5527.133: 0.4587% ( 3) 00:09:06.612 7316.871 - 7369.510: 0.4802% ( 3) 00:09:06.612 7369.510 - 7422.149: 0.4946% ( 2) 00:09:06.612 7422.149 - 7474.789: 0.5161% ( 3) 00:09:06.612 7474.789 - 7527.428: 0.5591% ( 6) 00:09:06.612 7527.428 - 7580.067: 0.5877% ( 4) 00:09:06.612 7580.067 - 7632.707: 0.6236% ( 5) 00:09:06.612 7632.707 - 7685.346: 0.6594% ( 5) 00:09:06.612 7685.346 - 7737.986: 0.6952% ( 5) 00:09:06.612 7737.986 - 7790.625: 0.7311% ( 5) 00:09:06.613 7790.625 - 7843.264: 0.7597% ( 4) 00:09:06.613 7843.264 - 7895.904: 0.7956% ( 5) 00:09:06.613 7895.904 - 7948.543: 0.8314% ( 5) 00:09:06.613 7948.543 - 8001.182: 0.8601% ( 4) 00:09:06.613 8001.182 - 8053.822: 0.9604% ( 14) 00:09:06.613 8053.822 - 8106.461: 1.1468% ( 26) 00:09:06.613 8106.461 - 8159.100: 1.5195% ( 52) 00:09:06.613 8159.100 - 8211.740: 2.4441% ( 129) 00:09:06.613 8211.740 - 8264.379: 4.4008% ( 273) 00:09:06.613 8264.379 - 8317.018: 7.0528% ( 370) 00:09:06.613 8317.018 - 8369.658: 10.3999% ( 467) 00:09:06.613 8369.658 - 8422.297: 14.2632% ( 539) 00:09:06.613 8422.297 - 8474.937: 18.5350% ( 596) 00:09:06.613 8474.937 - 8527.576: 23.0075% ( 624) 00:09:06.613 8527.576 - 8580.215: 27.7451% ( 661) 00:09:06.613 8580.215 - 8632.855: 32.5975% ( 677) 00:09:06.613 8632.855 - 8685.494: 37.5932% ( 697) 00:09:06.613 8685.494 - 8738.133: 42.8827% ( 738) 00:09:06.613 8738.133 - 8790.773: 48.4447% ( 776) 00:09:06.613 8790.773 - 8843.412: 54.1069% ( 790) 00:09:06.613 8843.412 - 8896.051: 59.7549% ( 788) 00:09:06.613 8896.051 - 8948.691: 65.2738% ( 770) 00:09:06.613 8948.691 - 9001.330: 70.6350% ( 748) 00:09:06.613 9001.330 - 9053.969: 75.3154% ( 653) 00:09:06.613 9053.969 - 9106.609: 79.3363% ( 561) 00:09:06.613 9106.609 - 9159.248: 82.4971% ( 441) 00:09:06.613 9159.248 - 9211.888: 85.0416% ( 355) 00:09:06.613 9211.888 - 9264.527: 87.1058% ( 288) 00:09:06.613 9264.527 - 9317.166: 88.7328% ( 227) 00:09:06.613 9317.166 - 9369.806: 90.1304% ( 195) 00:09:06.613 9369.806 - 9422.445: 91.2199% ( 152) 00:09:06.613 9422.445 - 9475.084: 92.1803% ( 134) 00:09:06.613 9475.084 - 9527.724: 92.9186% ( 103) 00:09:06.613 9527.724 - 9580.363: 93.4848% ( 79) 00:09:06.613 9580.363 - 9633.002: 93.8288% ( 48) 00:09:06.613 9633.002 - 9685.642: 94.0224% ( 27) 00:09:06.613 9685.642 - 9738.281: 94.1872% ( 23) 00:09:06.613 9738.281 - 9790.920: 94.3162% ( 18) 00:09:06.613 9790.920 - 9843.560: 94.4524% ( 19) 00:09:06.613 9843.560 - 9896.199: 94.5456% ( 13) 00:09:06.613 9896.199 - 9948.839: 94.6388% ( 13) 00:09:06.613 9948.839 - 10001.478: 94.6961% ( 8) 00:09:06.613 10001.478 - 10054.117: 94.7606% ( 9) 00:09:06.613 10054.117 - 10106.757: 94.8251% ( 9) 00:09:06.613 10106.757 - 10159.396: 94.8825% ( 8) 00:09:06.613 10159.396 - 10212.035: 94.9398% ( 8) 00:09:06.613 10212.035 - 10264.675: 95.0043% ( 9) 00:09:06.613 10264.675 - 10317.314: 95.0616% ( 8) 00:09:06.613 10317.314 - 10369.953: 95.1333% ( 10) 00:09:06.613 10369.953 - 10422.593: 95.2050% ( 10) 00:09:06.613 10422.593 - 10475.232: 95.2910% ( 12) 00:09:06.613 10475.232 - 10527.871: 95.3698% ( 11) 00:09:06.613 10527.871 - 10580.511: 95.4343% ( 9) 00:09:06.613 10580.511 - 10633.150: 95.4845% ( 7) 00:09:06.613 10633.150 - 10685.790: 95.4989% ( 2) 00:09:06.613 10685.790 - 10738.429: 95.5060% ( 1) 00:09:06.613 10738.429 - 10791.068: 95.5204% ( 2) 00:09:06.613 10791.068 - 10843.708: 95.5347% ( 2) 00:09:06.613 10843.708 - 10896.347: 95.5490% ( 2) 00:09:06.613 10896.347 - 10948.986: 95.5634% ( 2) 00:09:06.613 10948.986 - 11001.626: 95.5777% ( 2) 00:09:06.613 11001.626 - 11054.265: 95.5920% ( 2) 
00:09:06.613 11054.265 - 11106.904: 95.6064% ( 2) 00:09:06.613 11106.904 - 11159.544: 95.6207% ( 2) 00:09:06.613 11159.544 - 11212.183: 95.6350% ( 2) 00:09:06.613 11212.183 - 11264.822: 95.6494% ( 2) 00:09:06.613 11264.822 - 11317.462: 95.6565% ( 1) 00:09:06.613 11317.462 - 11370.101: 95.6780% ( 3) 00:09:06.613 11370.101 - 11422.741: 95.7067% ( 4) 00:09:06.613 11422.741 - 11475.380: 95.7354% ( 4) 00:09:06.613 11475.380 - 11528.019: 95.7640% ( 4) 00:09:06.613 11528.019 - 11580.659: 95.7927% ( 4) 00:09:06.613 11580.659 - 11633.298: 95.8214% ( 4) 00:09:06.613 11633.298 - 11685.937: 95.8501% ( 4) 00:09:06.613 11685.937 - 11738.577: 95.8787% ( 4) 00:09:06.613 11738.577 - 11791.216: 95.9002% ( 3) 00:09:06.613 11791.216 - 11843.855: 95.9217% ( 3) 00:09:06.613 11843.855 - 11896.495: 95.9576% ( 5) 00:09:06.613 11896.495 - 11949.134: 95.9862% ( 4) 00:09:06.613 11949.134 - 12001.773: 96.0149% ( 4) 00:09:06.613 12001.773 - 12054.413: 96.0436% ( 4) 00:09:06.613 12054.413 - 12107.052: 96.0794% ( 5) 00:09:06.613 12107.052 - 12159.692: 96.1296% ( 7) 00:09:06.613 12159.692 - 12212.331: 96.1941% ( 9) 00:09:06.613 12212.331 - 12264.970: 96.2514% ( 8) 00:09:06.613 12264.970 - 12317.610: 96.3159% ( 9) 00:09:06.613 12317.610 - 12370.249: 96.3948% ( 11) 00:09:06.613 12370.249 - 12422.888: 96.4665% ( 10) 00:09:06.613 12422.888 - 12475.528: 96.5381% ( 10) 00:09:06.613 12475.528 - 12528.167: 96.6313% ( 13) 00:09:06.613 12528.167 - 12580.806: 96.7317% ( 14) 00:09:06.613 12580.806 - 12633.446: 96.8392% ( 15) 00:09:06.613 12633.446 - 12686.085: 96.9395% ( 14) 00:09:06.613 12686.085 - 12738.724: 97.0470% ( 15) 00:09:06.613 12738.724 - 12791.364: 97.1402% ( 13) 00:09:06.613 12791.364 - 12844.003: 97.2405% ( 14) 00:09:06.613 12844.003 - 12896.643: 97.3409% ( 14) 00:09:06.613 12896.643 - 12949.282: 97.4269% ( 12) 00:09:06.613 12949.282 - 13001.921: 97.4842% ( 8) 00:09:06.613 13001.921 - 13054.561: 97.5344% ( 7) 00:09:06.613 13054.561 - 13107.200: 97.5846% ( 7) 00:09:06.613 13107.200 - 13159.839: 97.6132% ( 4) 00:09:06.613 13159.839 - 13212.479: 97.6347% ( 3) 00:09:06.613 13212.479 - 13265.118: 97.6634% ( 4) 00:09:06.613 13265.118 - 13317.757: 97.6921% ( 4) 00:09:06.613 13317.757 - 13370.397: 97.7208% ( 4) 00:09:06.613 13370.397 - 13423.036: 97.7638% ( 6) 00:09:06.613 13423.036 - 13475.676: 97.8068% ( 6) 00:09:06.613 13475.676 - 13580.954: 97.9143% ( 15) 00:09:06.613 13580.954 - 13686.233: 98.0361% ( 17) 00:09:06.613 13686.233 - 13791.512: 98.1508% ( 16) 00:09:06.613 13791.512 - 13896.790: 98.2511% ( 14) 00:09:06.613 13896.790 - 14002.069: 98.3587% ( 15) 00:09:06.613 14002.069 - 14107.348: 98.4805% ( 17) 00:09:06.613 14107.348 - 14212.627: 98.5737% ( 13) 00:09:06.613 14212.627 - 14317.905: 98.6095% ( 5) 00:09:06.613 14317.905 - 14423.184: 98.6310% ( 3) 00:09:06.613 14423.184 - 14528.463: 98.6669% ( 5) 00:09:06.613 14528.463 - 14633.741: 98.7027% ( 5) 00:09:06.613 14633.741 - 14739.020: 98.7242% ( 3) 00:09:06.613 14739.020 - 14844.299: 98.7600% ( 5) 00:09:06.613 14844.299 - 14949.578: 98.7887% ( 4) 00:09:06.613 14949.578 - 15054.856: 98.8102% ( 3) 00:09:06.613 15054.856 - 15160.135: 98.8460% ( 5) 00:09:06.613 15160.135 - 15265.414: 98.8747% ( 4) 00:09:06.613 15265.414 - 15370.692: 98.9034% ( 4) 00:09:06.613 15370.692 - 15475.971: 98.9392% ( 5) 00:09:06.613 15475.971 - 15581.250: 98.9679% ( 4) 00:09:06.613 15581.250 - 15686.529: 99.0037% ( 5) 00:09:06.613 15686.529 - 15791.807: 99.0324% ( 4) 00:09:06.613 15791.807 - 15897.086: 99.0682% ( 5) 00:09:06.613 15897.086 - 16002.365: 99.0826% ( 2) 00:09:06.613 31583.614 - 31794.172: 
99.1112% ( 4) 00:09:06.613 31794.172 - 32004.729: 99.2116% ( 14) 00:09:06.613 32004.729 - 32215.287: 99.3048% ( 13) 00:09:06.613 32215.287 - 32425.844: 99.3979% ( 13) 00:09:06.613 32425.844 - 32636.402: 99.4911% ( 13) 00:09:06.613 32636.402 - 32846.959: 99.5413% ( 7) 00:09:06.613 37268.665 - 37479.222: 99.5843% ( 6) 00:09:06.613 37479.222 - 37689.780: 99.6775% ( 13) 00:09:06.613 37689.780 - 37900.337: 99.7778% ( 14) 00:09:06.613 37900.337 - 38110.895: 99.8710% ( 13) 00:09:06.613 38110.895 - 38321.452: 99.9642% ( 13) 00:09:06.613 38321.452 - 38532.010: 100.0000% ( 5) 00:09:06.613 00:09:06.613 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:06.613 ============================================================================== 00:09:06.613 Range in us Cumulative IO count 00:09:06.613 4421.706 - 4448.026: 0.0143% ( 2) 00:09:06.613 4448.026 - 4474.345: 0.0215% ( 1) 00:09:06.613 4474.345 - 4500.665: 0.0430% ( 3) 00:09:06.613 4500.665 - 4526.985: 0.0645% ( 3) 00:09:06.613 4526.985 - 4553.304: 0.0860% ( 3) 00:09:06.613 4553.304 - 4579.624: 0.1003% ( 2) 00:09:06.613 4579.624 - 4605.944: 0.1147% ( 2) 00:09:06.613 4605.944 - 4632.263: 0.1290% ( 2) 00:09:06.613 4632.263 - 4658.583: 0.1362% ( 1) 00:09:06.613 4658.583 - 4684.903: 0.1505% ( 2) 00:09:06.613 4684.903 - 4711.222: 0.1649% ( 2) 00:09:06.613 4711.222 - 4737.542: 0.1792% ( 2) 00:09:06.613 4737.542 - 4763.862: 0.1935% ( 2) 00:09:06.613 4763.862 - 4790.182: 0.2079% ( 2) 00:09:06.613 4790.182 - 4816.501: 0.2294% ( 3) 00:09:06.613 4816.501 - 4842.821: 0.2437% ( 2) 00:09:06.613 4842.821 - 4869.141: 0.2652% ( 3) 00:09:06.613 4869.141 - 4895.460: 0.2867% ( 3) 00:09:06.613 4895.460 - 4921.780: 0.3010% ( 2) 00:09:06.613 4921.780 - 4948.100: 0.3154% ( 2) 00:09:06.613 4948.100 - 4974.419: 0.3369% ( 3) 00:09:06.613 4974.419 - 5000.739: 0.3512% ( 2) 00:09:06.613 5000.739 - 5027.059: 0.3727% ( 3) 00:09:06.613 5027.059 - 5053.378: 0.3942% ( 3) 00:09:06.613 5053.378 - 5079.698: 0.4085% ( 2) 00:09:06.613 5079.698 - 5106.018: 0.4300% ( 3) 00:09:06.613 5106.018 - 5132.337: 0.4444% ( 2) 00:09:06.613 5132.337 - 5158.657: 0.4587% ( 2) 00:09:06.613 7001.035 - 7053.674: 0.4946% ( 5) 00:09:06.613 7053.674 - 7106.313: 0.5232% ( 4) 00:09:06.613 7106.313 - 7158.953: 0.5591% ( 5) 00:09:06.613 7158.953 - 7211.592: 0.5949% ( 5) 00:09:06.613 7211.592 - 7264.231: 0.6307% ( 5) 00:09:06.613 7264.231 - 7316.871: 0.6594% ( 4) 00:09:06.614 7316.871 - 7369.510: 0.6952% ( 5) 00:09:06.614 7369.510 - 7422.149: 0.7239% ( 4) 00:09:06.614 7422.149 - 7474.789: 0.7597% ( 5) 00:09:06.614 7474.789 - 7527.428: 0.7884% ( 4) 00:09:06.614 7527.428 - 7580.067: 0.8243% ( 5) 00:09:06.614 7580.067 - 7632.707: 0.8529% ( 4) 00:09:06.614 7632.707 - 7685.346: 0.8888% ( 5) 00:09:06.614 7685.346 - 7737.986: 0.9174% ( 4) 00:09:06.614 7948.543 - 8001.182: 0.9318% ( 2) 00:09:06.614 8001.182 - 8053.822: 1.0393% ( 15) 00:09:06.614 8053.822 - 8106.461: 1.2400% ( 28) 00:09:06.614 8106.461 - 8159.100: 1.5912% ( 49) 00:09:06.614 8159.100 - 8211.740: 2.5444% ( 133) 00:09:06.614 8211.740 - 8264.379: 4.2861% ( 243) 00:09:06.614 8264.379 - 8317.018: 6.7804% ( 348) 00:09:06.614 8317.018 - 8369.658: 9.9842% ( 447) 00:09:06.614 8369.658 - 8422.297: 13.9192% ( 549) 00:09:06.614 8422.297 - 8474.937: 18.1049% ( 584) 00:09:06.614 8474.937 - 8527.576: 22.6061% ( 628) 00:09:06.614 8527.576 - 8580.215: 27.4083% ( 670) 00:09:06.614 8580.215 - 8632.855: 32.3610% ( 691) 00:09:06.614 8632.855 - 8685.494: 37.5502% ( 724) 00:09:06.614 8685.494 - 8738.133: 42.8684% ( 742) 00:09:06.614 8738.133 - 8790.773: 48.3802% ( 
769) 00:09:06.614 8790.773 - 8843.412: 53.9994% ( 784) 00:09:06.614 8843.412 - 8896.051: 59.5542% ( 775) 00:09:06.614 8896.051 - 8948.691: 65.0588% ( 768) 00:09:06.614 8948.691 - 9001.330: 70.4630% ( 754) 00:09:06.614 9001.330 - 9053.969: 75.2580% ( 669) 00:09:06.614 9053.969 - 9106.609: 79.2646% ( 559) 00:09:06.614 9106.609 - 9159.248: 82.3466% ( 430) 00:09:06.614 9159.248 - 9211.888: 85.0201% ( 373) 00:09:06.614 9211.888 - 9264.527: 86.9911% ( 275) 00:09:06.614 9264.527 - 9317.166: 88.6540% ( 232) 00:09:06.614 9317.166 - 9369.806: 90.1018% ( 202) 00:09:06.614 9369.806 - 9422.445: 91.1984% ( 153) 00:09:06.614 9422.445 - 9475.084: 92.1373% ( 131) 00:09:06.614 9475.084 - 9527.724: 92.9688% ( 116) 00:09:06.614 9527.724 - 9580.363: 93.4776% ( 71) 00:09:06.614 9580.363 - 9633.002: 93.7930% ( 44) 00:09:06.614 9633.002 - 9685.642: 94.0009% ( 29) 00:09:06.614 9685.642 - 9738.281: 94.1442% ( 20) 00:09:06.614 9738.281 - 9790.920: 94.2804% ( 19) 00:09:06.614 9790.920 - 9843.560: 94.3736% ( 13) 00:09:06.614 9843.560 - 9896.199: 94.4739% ( 14) 00:09:06.614 9896.199 - 9948.839: 94.5599% ( 12) 00:09:06.614 9948.839 - 10001.478: 94.6388% ( 11) 00:09:06.614 10001.478 - 10054.117: 94.7248% ( 12) 00:09:06.614 10054.117 - 10106.757: 94.8036% ( 11) 00:09:06.614 10106.757 - 10159.396: 94.8681% ( 9) 00:09:06.614 10159.396 - 10212.035: 94.9470% ( 11) 00:09:06.614 10212.035 - 10264.675: 95.0258% ( 11) 00:09:06.614 10264.675 - 10317.314: 95.0975% ( 10) 00:09:06.614 10317.314 - 10369.953: 95.1763% ( 11) 00:09:06.614 10369.953 - 10422.593: 95.2552% ( 11) 00:09:06.614 10422.593 - 10475.232: 95.3197% ( 9) 00:09:06.614 10475.232 - 10527.871: 95.3483% ( 4) 00:09:06.614 10527.871 - 10580.511: 95.3698% ( 3) 00:09:06.614 10580.511 - 10633.150: 95.3985% ( 4) 00:09:06.614 10633.150 - 10685.790: 95.4272% ( 4) 00:09:06.614 10685.790 - 10738.429: 95.4558% ( 4) 00:09:06.614 10738.429 - 10791.068: 95.4774% ( 3) 00:09:06.614 10791.068 - 10843.708: 95.5060% ( 4) 00:09:06.614 10843.708 - 10896.347: 95.5347% ( 4) 00:09:06.614 10896.347 - 10948.986: 95.5634% ( 4) 00:09:06.614 10948.986 - 11001.626: 95.5849% ( 3) 00:09:06.614 11001.626 - 11054.265: 95.6135% ( 4) 00:09:06.614 11054.265 - 11106.904: 95.6279% ( 2) 00:09:06.614 11106.904 - 11159.544: 95.6565% ( 4) 00:09:06.614 11159.544 - 11212.183: 95.6780% ( 3) 00:09:06.614 11212.183 - 11264.822: 95.7067% ( 4) 00:09:06.614 11264.822 - 11317.462: 95.7354% ( 4) 00:09:06.614 11317.462 - 11370.101: 95.7569% ( 3) 00:09:06.614 11370.101 - 11422.741: 95.7712% ( 2) 00:09:06.614 11422.741 - 11475.380: 95.7856% ( 2) 00:09:06.614 11475.380 - 11528.019: 95.7999% ( 2) 00:09:06.614 11528.019 - 11580.659: 95.8071% ( 1) 00:09:06.614 11580.659 - 11633.298: 95.8214% ( 2) 00:09:06.614 11633.298 - 11685.937: 95.8357% ( 2) 00:09:06.614 11685.937 - 11738.577: 95.8501% ( 2) 00:09:06.614 11738.577 - 11791.216: 95.8644% ( 2) 00:09:06.614 11791.216 - 11843.855: 95.8787% ( 2) 00:09:06.614 11843.855 - 11896.495: 95.8931% ( 2) 00:09:06.614 11896.495 - 11949.134: 95.9217% ( 4) 00:09:06.614 11949.134 - 12001.773: 95.9576% ( 5) 00:09:06.614 12001.773 - 12054.413: 95.9719% ( 2) 00:09:06.614 12054.413 - 12107.052: 96.0364% ( 9) 00:09:06.614 12107.052 - 12159.692: 96.0938% ( 8) 00:09:06.614 12159.692 - 12212.331: 96.1583% ( 9) 00:09:06.614 12212.331 - 12264.970: 96.2443% ( 12) 00:09:06.614 12264.970 - 12317.610: 96.3518% ( 15) 00:09:06.614 12317.610 - 12370.249: 96.4521% ( 14) 00:09:06.614 12370.249 - 12422.888: 96.5596% ( 15) 00:09:06.614 12422.888 - 12475.528: 96.6528% ( 13) 00:09:06.614 12475.528 - 12528.167: 96.7532% 
( 14) 00:09:06.614 12528.167 - 12580.806: 96.8535% ( 14) 00:09:06.614 12580.806 - 12633.446: 96.9538% ( 14) 00:09:06.614 12633.446 - 12686.085: 97.0470% ( 13) 00:09:06.614 12686.085 - 12738.724: 97.1402% ( 13) 00:09:06.614 12738.724 - 12791.364: 97.2405% ( 14) 00:09:06.614 12791.364 - 12844.003: 97.3409% ( 14) 00:09:06.614 12844.003 - 12896.643: 97.4197% ( 11) 00:09:06.614 12896.643 - 12949.282: 97.5057% ( 12) 00:09:06.614 12949.282 - 13001.921: 97.5846% ( 11) 00:09:06.614 13001.921 - 13054.561: 97.6634% ( 11) 00:09:06.614 13054.561 - 13107.200: 97.7208% ( 8) 00:09:06.614 13107.200 - 13159.839: 97.7853% ( 9) 00:09:06.614 13159.839 - 13212.479: 97.8283% ( 6) 00:09:06.614 13212.479 - 13265.118: 97.8713% ( 6) 00:09:06.614 13265.118 - 13317.757: 97.9071% ( 5) 00:09:06.614 13317.757 - 13370.397: 97.9501% ( 6) 00:09:06.614 13370.397 - 13423.036: 97.9931% ( 6) 00:09:06.614 13423.036 - 13475.676: 98.0576% ( 9) 00:09:06.614 13475.676 - 13580.954: 98.1580% ( 14) 00:09:06.614 13580.954 - 13686.233: 98.2655% ( 15) 00:09:06.614 13686.233 - 13791.512: 98.3515% ( 12) 00:09:06.614 13791.512 - 13896.790: 98.4303% ( 11) 00:09:06.614 13896.790 - 14002.069: 98.4948% ( 9) 00:09:06.614 14002.069 - 14107.348: 98.5737% ( 11) 00:09:06.614 14107.348 - 14212.627: 98.6525% ( 11) 00:09:06.614 14212.627 - 14317.905: 98.6884% ( 5) 00:09:06.614 14317.905 - 14423.184: 98.7099% ( 3) 00:09:06.614 14423.184 - 14528.463: 98.7385% ( 4) 00:09:06.614 14528.463 - 14633.741: 98.7600% ( 3) 00:09:06.614 14633.741 - 14739.020: 98.7815% ( 3) 00:09:06.614 14739.020 - 14844.299: 98.8030% ( 3) 00:09:06.614 14844.299 - 14949.578: 98.8174% ( 2) 00:09:06.614 14949.578 - 15054.856: 98.8460% ( 4) 00:09:06.614 15054.856 - 15160.135: 98.8675% ( 3) 00:09:06.614 15160.135 - 15265.414: 98.8819% ( 2) 00:09:06.614 15265.414 - 15370.692: 98.9034% ( 3) 00:09:06.614 15370.692 - 15475.971: 98.9249% ( 3) 00:09:06.614 15475.971 - 15581.250: 98.9464% ( 3) 00:09:06.614 15581.250 - 15686.529: 98.9679% ( 3) 00:09:06.614 15686.529 - 15791.807: 98.9894% ( 3) 00:09:06.614 15791.807 - 15897.086: 99.0037% ( 2) 00:09:06.614 15897.086 - 16002.365: 99.0252% ( 3) 00:09:06.614 16002.365 - 16107.643: 99.0539% ( 4) 00:09:06.614 16107.643 - 16212.922: 99.0754% ( 3) 00:09:06.614 16212.922 - 16318.201: 99.0826% ( 1) 00:09:06.614 31162.500 - 31373.057: 99.1327% ( 7) 00:09:06.614 31373.057 - 31583.614: 99.2259% ( 13) 00:09:06.614 31583.614 - 31794.172: 99.3263% ( 14) 00:09:06.614 31794.172 - 32004.729: 99.4123% ( 12) 00:09:06.614 32004.729 - 32215.287: 99.4983% ( 12) 00:09:06.614 32215.287 - 32425.844: 99.5413% ( 6) 00:09:06.614 36847.550 - 37058.108: 99.6201% ( 11) 00:09:06.614 37058.108 - 37268.665: 99.7133% ( 13) 00:09:06.614 37268.665 - 37479.222: 99.7993% ( 12) 00:09:06.614 37479.222 - 37689.780: 99.8925% ( 13) 00:09:06.614 37689.780 - 37900.337: 99.9857% ( 13) 00:09:06.614 37900.337 - 38110.895: 100.0000% ( 2) 00:09:06.614 00:09:06.614 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:06.614 ============================================================================== 00:09:06.614 Range in us Cumulative IO count 00:09:06.614 4053.231 - 4079.550: 0.0143% ( 2) 00:09:06.614 4079.550 - 4105.870: 0.0287% ( 2) 00:09:06.614 4105.870 - 4132.190: 0.0430% ( 2) 00:09:06.614 4132.190 - 4158.509: 0.0645% ( 3) 00:09:06.614 4158.509 - 4184.829: 0.0860% ( 3) 00:09:06.614 4184.829 - 4211.149: 0.1003% ( 2) 00:09:06.614 4211.149 - 4237.468: 0.1218% ( 3) 00:09:06.614 4237.468 - 4263.788: 0.1362% ( 2) 00:09:06.614 4263.788 - 4290.108: 0.1505% ( 2) 00:09:06.614 4290.108 - 
4316.427: 0.1720% ( 3) 00:09:06.614 4316.427 - 4342.747: 0.1864% ( 2) 00:09:06.614 4342.747 - 4369.067: 0.2079% ( 3) 00:09:06.614 4369.067 - 4395.386: 0.2222% ( 2) 00:09:06.614 4395.386 - 4421.706: 0.2437% ( 3) 00:09:06.614 4421.706 - 4448.026: 0.2580% ( 2) 00:09:06.614 4448.026 - 4474.345: 0.2795% ( 3) 00:09:06.614 4474.345 - 4500.665: 0.2939% ( 2) 00:09:06.614 4500.665 - 4526.985: 0.3154% ( 3) 00:09:06.614 4526.985 - 4553.304: 0.3297% ( 2) 00:09:06.614 4553.304 - 4579.624: 0.3512% ( 3) 00:09:06.614 4579.624 - 4605.944: 0.3655% ( 2) 00:09:06.614 4605.944 - 4632.263: 0.3870% ( 3) 00:09:06.614 4632.263 - 4658.583: 0.4014% ( 2) 00:09:06.614 4658.583 - 4684.903: 0.4229% ( 3) 00:09:06.614 4684.903 - 4711.222: 0.4372% ( 2) 00:09:06.614 4711.222 - 4737.542: 0.4587% ( 3) 00:09:06.614 6658.879 - 6685.198: 0.4659% ( 1) 00:09:06.614 6685.198 - 6711.518: 0.4802% ( 2) 00:09:06.614 6711.518 - 6737.838: 0.4946% ( 2) 00:09:06.615 6737.838 - 6790.477: 0.5232% ( 4) 00:09:06.615 6790.477 - 6843.116: 0.5662% ( 6) 00:09:06.615 6843.116 - 6895.756: 0.6021% ( 5) 00:09:06.615 6895.756 - 6948.395: 0.6379% ( 5) 00:09:06.615 6948.395 - 7001.035: 0.6737% ( 5) 00:09:06.615 7001.035 - 7053.674: 0.7096% ( 5) 00:09:06.615 7053.674 - 7106.313: 0.7454% ( 5) 00:09:06.615 7106.313 - 7158.953: 0.7812% ( 5) 00:09:06.615 7158.953 - 7211.592: 0.8171% ( 5) 00:09:06.615 7211.592 - 7264.231: 0.8458% ( 4) 00:09:06.615 7264.231 - 7316.871: 0.8888% ( 6) 00:09:06.615 7316.871 - 7369.510: 0.9174% ( 4) 00:09:06.615 7948.543 - 8001.182: 0.9318% ( 2) 00:09:06.615 8001.182 - 8053.822: 1.0321% ( 14) 00:09:06.615 8053.822 - 8106.461: 1.2256% ( 27) 00:09:06.615 8106.461 - 8159.100: 1.5697% ( 48) 00:09:06.615 8159.100 - 8211.740: 2.7953% ( 171) 00:09:06.615 8211.740 - 8264.379: 4.5083% ( 239) 00:09:06.615 8264.379 - 8317.018: 7.0169% ( 350) 00:09:06.615 8317.018 - 8369.658: 10.2638% ( 453) 00:09:06.615 8369.658 - 8422.297: 14.0625% ( 530) 00:09:06.615 8422.297 - 8474.937: 18.2268% ( 581) 00:09:06.615 8474.937 - 8527.576: 22.6921% ( 623) 00:09:06.615 8527.576 - 8580.215: 27.4871% ( 669) 00:09:06.615 8580.215 - 8632.855: 32.5473% ( 706) 00:09:06.615 8632.855 - 8685.494: 37.6362% ( 710) 00:09:06.615 8685.494 - 8738.133: 42.8541% ( 728) 00:09:06.615 8738.133 - 8790.773: 48.4303% ( 778) 00:09:06.615 8790.773 - 8843.412: 54.0066% ( 778) 00:09:06.615 8843.412 - 8896.051: 59.6115% ( 782) 00:09:06.615 8896.051 - 8948.691: 65.1233% ( 769) 00:09:06.615 8948.691 - 9001.330: 70.3197% ( 725) 00:09:06.615 9001.330 - 9053.969: 74.9785% ( 650) 00:09:06.615 9053.969 - 9106.609: 78.9636% ( 556) 00:09:06.615 9106.609 - 9159.248: 82.3036% ( 466) 00:09:06.615 9159.248 - 9211.888: 85.0989% ( 390) 00:09:06.615 9211.888 - 9264.527: 87.2205% ( 296) 00:09:06.615 9264.527 - 9317.166: 88.8833% ( 232) 00:09:06.615 9317.166 - 9369.806: 90.1878% ( 182) 00:09:06.615 9369.806 - 9422.445: 91.3632% ( 164) 00:09:06.615 9422.445 - 9475.084: 92.2520% ( 124) 00:09:06.615 9475.084 - 9527.724: 92.9114% ( 92) 00:09:06.615 9527.724 - 9580.363: 93.4848% ( 80) 00:09:06.615 9580.363 - 9633.002: 93.8288% ( 48) 00:09:06.615 9633.002 - 9685.642: 94.0725% ( 34) 00:09:06.615 9685.642 - 9738.281: 94.2947% ( 31) 00:09:06.615 9738.281 - 9790.920: 94.4166% ( 17) 00:09:06.615 9790.920 - 9843.560: 94.5312% ( 16) 00:09:06.615 9843.560 - 9896.199: 94.6531% ( 17) 00:09:06.615 9896.199 - 9948.839: 94.7821% ( 18) 00:09:06.615 9948.839 - 10001.478: 94.8753% ( 13) 00:09:06.615 10001.478 - 10054.117: 94.9756% ( 14) 00:09:06.615 10054.117 - 10106.757: 95.0831% ( 15) 00:09:06.615 10106.757 - 10159.396: 
95.1907% ( 15) 00:09:06.615 10159.396 - 10212.035: 95.2910% ( 14) 00:09:06.615 10212.035 - 10264.675: 95.3698% ( 11) 00:09:06.615 10264.675 - 10317.314: 95.4487% ( 11) 00:09:06.615 10317.314 - 10369.953: 95.5060% ( 8) 00:09:06.615 10369.953 - 10422.593: 95.5634% ( 8) 00:09:06.615 10422.593 - 10475.232: 95.6135% ( 7) 00:09:06.615 10475.232 - 10527.871: 95.6279% ( 2) 00:09:06.615 10527.871 - 10580.511: 95.6565% ( 4) 00:09:06.615 10580.511 - 10633.150: 95.6709% ( 2) 00:09:06.615 10633.150 - 10685.790: 95.6852% ( 2) 00:09:06.615 10685.790 - 10738.429: 95.6995% ( 2) 00:09:06.615 10738.429 - 10791.068: 95.7139% ( 2) 00:09:06.615 10791.068 - 10843.708: 95.7282% ( 2) 00:09:06.615 10843.708 - 10896.347: 95.7425% ( 2) 00:09:06.615 10896.347 - 10948.986: 95.7569% ( 2) 00:09:06.615 10948.986 - 11001.626: 95.7640% ( 1) 00:09:06.615 11001.626 - 11054.265: 95.7784% ( 2) 00:09:06.615 11054.265 - 11106.904: 95.7856% ( 1) 00:09:06.615 11106.904 - 11159.544: 95.7999% ( 2) 00:09:06.615 11159.544 - 11212.183: 95.8142% ( 2) 00:09:06.615 11212.183 - 11264.822: 95.8214% ( 1) 00:09:06.615 11264.822 - 11317.462: 95.8357% ( 2) 00:09:06.615 11317.462 - 11370.101: 95.8501% ( 2) 00:09:06.615 11370.101 - 11422.741: 95.8644% ( 2) 00:09:06.615 11422.741 - 11475.380: 95.8716% ( 1) 00:09:06.615 11685.937 - 11738.577: 95.8787% ( 1) 00:09:06.615 11791.216 - 11843.855: 95.8931% ( 2) 00:09:06.615 11843.855 - 11896.495: 95.9074% ( 2) 00:09:06.615 11896.495 - 11949.134: 95.9217% ( 2) 00:09:06.615 11949.134 - 12001.773: 95.9361% ( 2) 00:09:06.615 12001.773 - 12054.413: 95.9504% ( 2) 00:09:06.615 12054.413 - 12107.052: 95.9576% ( 1) 00:09:06.615 12107.052 - 12159.692: 96.0077% ( 7) 00:09:06.615 12159.692 - 12212.331: 96.0579% ( 7) 00:09:06.615 12212.331 - 12264.970: 96.1009% ( 6) 00:09:06.615 12264.970 - 12317.610: 96.1726% ( 10) 00:09:06.615 12317.610 - 12370.249: 96.2371% ( 9) 00:09:06.615 12370.249 - 12422.888: 96.3088% ( 10) 00:09:06.615 12422.888 - 12475.528: 96.3804% ( 10) 00:09:06.615 12475.528 - 12528.167: 96.4736% ( 13) 00:09:06.615 12528.167 - 12580.806: 96.5668% ( 13) 00:09:06.615 12580.806 - 12633.446: 96.6528% ( 12) 00:09:06.615 12633.446 - 12686.085: 96.7245% ( 10) 00:09:06.615 12686.085 - 12738.724: 96.8033% ( 11) 00:09:06.615 12738.724 - 12791.364: 96.8893% ( 12) 00:09:06.615 12791.364 - 12844.003: 96.9610% ( 10) 00:09:06.615 12844.003 - 12896.643: 97.0470% ( 12) 00:09:06.615 12896.643 - 12949.282: 97.1259% ( 11) 00:09:06.615 12949.282 - 13001.921: 97.1975% ( 10) 00:09:06.615 13001.921 - 13054.561: 97.2620% ( 9) 00:09:06.615 13054.561 - 13107.200: 97.3337% ( 10) 00:09:06.615 13107.200 - 13159.839: 97.4197% ( 12) 00:09:06.615 13159.839 - 13212.479: 97.4914% ( 10) 00:09:06.615 13212.479 - 13265.118: 97.5416% ( 7) 00:09:06.615 13265.118 - 13317.757: 97.5917% ( 7) 00:09:06.615 13317.757 - 13370.397: 97.6562% ( 9) 00:09:06.615 13370.397 - 13423.036: 97.7279% ( 10) 00:09:06.615 13423.036 - 13475.676: 97.7853% ( 8) 00:09:06.615 13475.676 - 13580.954: 97.9429% ( 22) 00:09:06.615 13580.954 - 13686.233: 98.0791% ( 19) 00:09:06.615 13686.233 - 13791.512: 98.2440% ( 23) 00:09:06.615 13791.512 - 13896.790: 98.3443% ( 14) 00:09:06.615 13896.790 - 14002.069: 98.4662% ( 17) 00:09:06.615 14002.069 - 14107.348: 98.5450% ( 11) 00:09:06.615 14107.348 - 14212.627: 98.6167% ( 10) 00:09:06.615 14212.627 - 14317.905: 98.6955% ( 11) 00:09:06.615 14317.905 - 14423.184: 98.7385% ( 6) 00:09:06.615 14423.184 - 14528.463: 98.7600% ( 3) 00:09:06.615 14528.463 - 14633.741: 98.7815% ( 3) 00:09:06.615 14633.741 - 14739.020: 98.8030% ( 3) 
00:09:06.615 14739.020 - 14844.299: 98.8245% ( 3) 00:09:06.615 14844.299 - 14949.578: 98.8460% ( 3) 00:09:06.615 14949.578 - 15054.856: 98.8675% ( 3) 00:09:06.615 15054.856 - 15160.135: 98.8890% ( 3) 00:09:06.615 15160.135 - 15265.414: 98.9106% ( 3) 00:09:06.615 15265.414 - 15370.692: 98.9321% ( 3) 00:09:06.615 15370.692 - 15475.971: 98.9536% ( 3) 00:09:06.615 15475.971 - 15581.250: 98.9751% ( 3) 00:09:06.615 15581.250 - 15686.529: 98.9894% ( 2) 00:09:06.615 15686.529 - 15791.807: 99.0109% ( 3) 00:09:06.615 15791.807 - 15897.086: 99.0396% ( 4) 00:09:06.615 15897.086 - 16002.365: 99.0539% ( 2) 00:09:06.615 16002.365 - 16107.643: 99.0754% ( 3) 00:09:06.615 16107.643 - 16212.922: 99.0826% ( 1) 00:09:06.615 30741.385 - 30951.942: 99.1614% ( 11) 00:09:06.615 30951.942 - 31162.500: 99.2546% ( 13) 00:09:06.615 31162.500 - 31373.057: 99.3478% ( 13) 00:09:06.615 31373.057 - 31583.614: 99.4481% ( 14) 00:09:06.615 31583.614 - 31794.172: 99.5413% ( 13) 00:09:06.615 36215.878 - 36426.435: 99.5700% ( 4) 00:09:06.615 36426.435 - 36636.993: 99.6703% ( 14) 00:09:06.615 36636.993 - 36847.550: 99.7635% ( 13) 00:09:06.615 36847.550 - 37058.108: 99.8567% ( 13) 00:09:06.615 37058.108 - 37268.665: 99.9427% ( 12) 00:09:06.615 37268.665 - 37479.222: 100.0000% ( 8) 00:09:06.615 00:09:06.616 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:06.616 ============================================================================== 00:09:06.616 Range in us Cumulative IO count 00:09:06.616 3579.476 - 3605.796: 0.0071% ( 1) 00:09:06.616 3605.796 - 3632.116: 0.0357% ( 4) 00:09:06.616 3632.116 - 3658.435: 0.0499% ( 2) 00:09:06.616 3658.435 - 3684.755: 0.0642% ( 2) 00:09:06.616 3684.755 - 3711.075: 0.0785% ( 2) 00:09:06.616 3711.075 - 3737.394: 0.0999% ( 3) 00:09:06.616 3737.394 - 3763.714: 0.1142% ( 2) 00:09:06.616 3763.714 - 3790.034: 0.1356% ( 3) 00:09:06.616 3790.034 - 3816.353: 0.1498% ( 2) 00:09:06.616 3816.353 - 3842.673: 0.1641% ( 2) 00:09:06.616 3842.673 - 3868.993: 0.1784% ( 2) 00:09:06.616 3868.993 - 3895.312: 0.1998% ( 3) 00:09:06.616 3895.312 - 3921.632: 0.2140% ( 2) 00:09:06.616 3921.632 - 3947.952: 0.2354% ( 3) 00:09:06.616 3947.952 - 3974.271: 0.2497% ( 2) 00:09:06.616 3974.271 - 4000.591: 0.2711% ( 3) 00:09:06.616 4000.591 - 4026.911: 0.2925% ( 3) 00:09:06.616 4026.911 - 4053.231: 0.3068% ( 2) 00:09:06.616 4053.231 - 4079.550: 0.3282% ( 3) 00:09:06.616 4079.550 - 4105.870: 0.3425% ( 2) 00:09:06.616 4105.870 - 4132.190: 0.3639% ( 3) 00:09:06.616 4132.190 - 4158.509: 0.3781% ( 2) 00:09:06.616 4158.509 - 4184.829: 0.3995% ( 3) 00:09:06.616 4184.829 - 4211.149: 0.4209% ( 3) 00:09:06.616 4211.149 - 4237.468: 0.4352% ( 2) 00:09:06.616 4237.468 - 4263.788: 0.4566% ( 3) 00:09:06.616 6316.723 - 6343.043: 0.4709% ( 2) 00:09:06.616 6343.043 - 6369.362: 0.4923% ( 3) 00:09:06.616 6369.362 - 6395.682: 0.5066% ( 2) 00:09:06.616 6395.682 - 6422.002: 0.5208% ( 2) 00:09:06.616 6422.002 - 6448.321: 0.5351% ( 2) 00:09:06.616 6448.321 - 6474.641: 0.5565% ( 3) 00:09:06.616 6474.641 - 6500.961: 0.5779% ( 3) 00:09:06.616 6500.961 - 6527.280: 0.5922% ( 2) 00:09:06.616 6527.280 - 6553.600: 0.6064% ( 2) 00:09:06.616 6553.600 - 6579.920: 0.6279% ( 3) 00:09:06.616 6579.920 - 6606.239: 0.6421% ( 2) 00:09:06.616 6606.239 - 6632.559: 0.6635% ( 3) 00:09:06.616 6632.559 - 6658.879: 0.6778% ( 2) 00:09:06.616 6658.879 - 6685.198: 0.6992% ( 3) 00:09:06.616 6685.198 - 6711.518: 0.7135% ( 2) 00:09:06.616 6711.518 - 6737.838: 0.7349% ( 3) 00:09:06.616 6737.838 - 6790.477: 0.7634% ( 4) 00:09:06.616 6790.477 - 6843.116: 0.7991% ( 5) 
00:09:06.616 6843.116 - 6895.756: 0.8348% ( 5) 00:09:06.616 6895.756 - 6948.395: 0.8704% ( 5) 00:09:06.616 6948.395 - 7001.035: 0.9061% ( 5) 00:09:06.616 7001.035 - 7053.674: 0.9132% ( 1) 00:09:06.616 7948.543 - 8001.182: 0.9346% ( 3) 00:09:06.616 8001.182 - 8053.822: 1.0203% ( 12) 00:09:06.616 8053.822 - 8106.461: 1.1915% ( 24) 00:09:06.616 8106.461 - 8159.100: 1.6410% ( 63) 00:09:06.616 8159.100 - 8211.740: 2.6398% ( 140) 00:09:06.616 8211.740 - 8264.379: 4.4235% ( 250) 00:09:06.616 8264.379 - 8317.018: 7.0634% ( 370) 00:09:06.616 8317.018 - 8369.658: 10.2026% ( 440) 00:09:06.616 8369.658 - 8422.297: 13.9769% ( 529) 00:09:06.616 8422.297 - 8474.937: 18.1364% ( 583) 00:09:06.616 8474.937 - 8527.576: 22.7882% ( 652) 00:09:06.616 8527.576 - 8580.215: 27.5186% ( 663) 00:09:06.616 8580.215 - 8632.855: 32.3701% ( 680) 00:09:06.616 8632.855 - 8685.494: 37.5000% ( 719) 00:09:06.616 8685.494 - 8738.133: 42.9438% ( 763) 00:09:06.616 8738.133 - 8790.773: 48.4161% ( 767) 00:09:06.616 8790.773 - 8843.412: 53.9170% ( 771) 00:09:06.616 8843.412 - 8896.051: 59.4249% ( 772) 00:09:06.616 8896.051 - 8948.691: 64.9543% ( 775) 00:09:06.616 8948.691 - 9001.330: 70.1484% ( 728) 00:09:06.616 9001.330 - 9053.969: 74.9215% ( 669) 00:09:06.616 9053.969 - 9106.609: 78.8813% ( 555) 00:09:06.616 9106.609 - 9159.248: 82.1276% ( 455) 00:09:06.616 9159.248 - 9211.888: 84.7745% ( 371) 00:09:06.616 9211.888 - 9264.527: 86.9792% ( 309) 00:09:06.616 9264.527 - 9317.166: 88.6701% ( 237) 00:09:06.616 9317.166 - 9369.806: 90.0614% ( 195) 00:09:06.616 9369.806 - 9422.445: 91.1387% ( 151) 00:09:06.616 9422.445 - 9475.084: 92.0163% ( 123) 00:09:06.616 9475.084 - 9527.724: 92.7012% ( 96) 00:09:06.616 9527.724 - 9580.363: 93.1935% ( 69) 00:09:06.616 9580.363 - 9633.002: 93.5431% ( 49) 00:09:06.616 9633.002 - 9685.642: 93.7999% ( 36) 00:09:06.616 9685.642 - 9738.281: 93.9498% ( 21) 00:09:06.616 9738.281 - 9790.920: 94.0996% ( 21) 00:09:06.616 9790.920 - 9843.560: 94.2209% ( 17) 00:09:06.616 9843.560 - 9896.199: 94.3279% ( 15) 00:09:06.616 9896.199 - 9948.839: 94.4278% ( 14) 00:09:06.616 9948.839 - 10001.478: 94.4991% ( 10) 00:09:06.616 10001.478 - 10054.117: 94.5705% ( 10) 00:09:06.616 10054.117 - 10106.757: 94.6490% ( 11) 00:09:06.616 10106.757 - 10159.396: 94.7275% ( 11) 00:09:06.616 10159.396 - 10212.035: 94.8202% ( 13) 00:09:06.616 10212.035 - 10264.675: 94.9201% ( 14) 00:09:06.616 10264.675 - 10317.314: 95.0057% ( 12) 00:09:06.616 10317.314 - 10369.953: 95.0771% ( 10) 00:09:06.616 10369.953 - 10422.593: 95.1341% ( 8) 00:09:06.616 10422.593 - 10475.232: 95.1912% ( 8) 00:09:06.616 10475.232 - 10527.871: 95.2626% ( 10) 00:09:06.616 10527.871 - 10580.511: 95.3196% ( 8) 00:09:06.616 10580.511 - 10633.150: 95.3838% ( 9) 00:09:06.616 10633.150 - 10685.790: 95.4481% ( 9) 00:09:06.616 10685.790 - 10738.429: 95.5051% ( 8) 00:09:06.616 10738.429 - 10791.068: 95.5622% ( 8) 00:09:06.616 10791.068 - 10843.708: 95.6336% ( 10) 00:09:06.616 10843.708 - 10896.347: 95.6835% ( 7) 00:09:06.616 10896.347 - 10948.986: 95.7477% ( 9) 00:09:06.616 10948.986 - 11001.626: 95.7977% ( 7) 00:09:06.616 11001.626 - 11054.265: 95.8333% ( 5) 00:09:06.616 11054.265 - 11106.904: 95.8476% ( 2) 00:09:06.616 11106.904 - 11159.544: 95.8619% ( 2) 00:09:06.616 11159.544 - 11212.183: 95.8761% ( 2) 00:09:06.616 11212.183 - 11264.822: 95.8904% ( 2) 00:09:06.616 11370.101 - 11422.741: 95.9047% ( 2) 00:09:06.616 11422.741 - 11475.380: 95.9261% ( 3) 00:09:06.616 11475.380 - 11528.019: 95.9404% ( 2) 00:09:06.616 11528.019 - 11580.659: 95.9546% ( 2) 00:09:06.616 11580.659 - 
11633.298: 95.9689% ( 2) 00:09:06.616 11633.298 - 11685.937: 95.9832% ( 2) 00:09:06.616 11685.937 - 11738.577: 95.9903% ( 1) 00:09:06.616 11738.577 - 11791.216: 96.0046% ( 2) 00:09:06.616 11791.216 - 11843.855: 96.0188% ( 2) 00:09:06.616 11843.855 - 11896.495: 96.0402% ( 3) 00:09:06.616 11896.495 - 11949.134: 96.0545% ( 2) 00:09:06.616 11949.134 - 12001.773: 96.0688% ( 2) 00:09:06.616 12001.773 - 12054.413: 96.0759% ( 1) 00:09:06.616 12054.413 - 12107.052: 96.0902% ( 2) 00:09:06.616 12107.052 - 12159.692: 96.1045% ( 2) 00:09:06.616 12159.692 - 12212.331: 96.1615% ( 8) 00:09:06.616 12212.331 - 12264.970: 96.1972% ( 5) 00:09:06.616 12264.970 - 12317.610: 96.2757% ( 11) 00:09:06.616 12317.610 - 12370.249: 96.3328% ( 8) 00:09:06.616 12370.249 - 12422.888: 96.4041% ( 10) 00:09:06.616 12422.888 - 12475.528: 96.4755% ( 10) 00:09:06.616 12475.528 - 12528.167: 96.5539% ( 11) 00:09:06.616 12528.167 - 12580.806: 96.6253% ( 10) 00:09:06.616 12580.806 - 12633.446: 96.6752% ( 7) 00:09:06.616 12633.446 - 12686.085: 96.7394% ( 9) 00:09:06.616 12686.085 - 12738.724: 96.8037% ( 9) 00:09:06.616 12738.724 - 12791.364: 96.8536% ( 7) 00:09:06.616 12791.364 - 12844.003: 96.9178% ( 9) 00:09:06.616 12844.003 - 12896.643: 96.9892% ( 10) 00:09:06.616 12896.643 - 12949.282: 97.0676% ( 11) 00:09:06.616 12949.282 - 13001.921: 97.1461% ( 11) 00:09:06.616 13001.921 - 13054.561: 97.2246% ( 11) 00:09:06.616 13054.561 - 13107.200: 97.2888% ( 9) 00:09:06.616 13107.200 - 13159.839: 97.3245% ( 5) 00:09:06.616 13159.839 - 13212.479: 97.3388% ( 2) 00:09:06.616 13212.479 - 13265.118: 97.3602% ( 3) 00:09:06.616 13265.118 - 13317.757: 97.3673% ( 1) 00:09:06.616 13317.757 - 13370.397: 97.4244% ( 8) 00:09:06.616 13370.397 - 13423.036: 97.4458% ( 3) 00:09:06.616 13423.036 - 13475.676: 97.4957% ( 7) 00:09:06.616 13475.676 - 13580.954: 97.6027% ( 15) 00:09:06.616 13580.954 - 13686.233: 97.7454% ( 20) 00:09:06.616 13686.233 - 13791.512: 97.9167% ( 24) 00:09:06.616 13791.512 - 13896.790: 98.0879% ( 24) 00:09:06.616 13896.790 - 14002.069: 98.2520% ( 23) 00:09:06.616 14002.069 - 14107.348: 98.4232% ( 24) 00:09:06.616 14107.348 - 14212.627: 98.5731% ( 21) 00:09:06.616 14212.627 - 14317.905: 98.6872% ( 16) 00:09:06.616 14317.905 - 14423.184: 98.7800% ( 13) 00:09:06.616 14423.184 - 14528.463: 98.8228% ( 6) 00:09:06.616 14528.463 - 14633.741: 98.8513% ( 4) 00:09:06.616 14633.741 - 14739.020: 98.8799% ( 4) 00:09:06.617 14739.020 - 14844.299: 98.9013% ( 3) 00:09:06.617 14844.299 - 14949.578: 98.9298% ( 4) 00:09:06.617 14949.578 - 15054.856: 98.9512% ( 3) 00:09:06.617 15054.856 - 15160.135: 98.9726% ( 3) 00:09:06.617 15160.135 - 15265.414: 99.0011% ( 4) 00:09:06.617 15265.414 - 15370.692: 99.0225% ( 3) 00:09:06.617 15370.692 - 15475.971: 99.0511% ( 4) 00:09:06.617 15475.971 - 15581.250: 99.0725% ( 3) 00:09:06.617 15581.250 - 15686.529: 99.0868% ( 2) 00:09:06.617 24740.498 - 24845.777: 99.1082% ( 3) 00:09:06.617 24845.777 - 24951.055: 99.1510% ( 6) 00:09:06.617 24951.055 - 25056.334: 99.2009% ( 7) 00:09:06.617 25056.334 - 25161.613: 99.2509% ( 7) 00:09:06.617 25161.613 - 25266.892: 99.3008% ( 7) 00:09:06.617 25266.892 - 25372.170: 99.3507% ( 7) 00:09:06.617 25372.170 - 25477.449: 99.4007% ( 7) 00:09:06.617 25477.449 - 25582.728: 99.4506% ( 7) 00:09:06.617 25582.728 - 25688.006: 99.5006% ( 7) 00:09:06.617 25688.006 - 25793.285: 99.5434% ( 6) 00:09:06.617 30741.385 - 30951.942: 99.5648% ( 3) 00:09:06.617 30951.942 - 31162.500: 99.6575% ( 13) 00:09:06.617 31162.500 - 31373.057: 99.7432% ( 12) 00:09:06.617 31373.057 - 31583.614: 99.8430% ( 14) 
00:09:06.617 31583.614 - 31794.172: 99.9429% ( 14) 00:09:06.617 31794.172 - 32004.729: 100.0000% ( 8) 00:09:06.617 00:09:06.617 00:13:21 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:07.996 Initializing NVMe Controllers 00:09:07.996 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:07.996 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:07.996 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:07.996 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:07.996 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:07.996 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:07.996 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:07.996 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:07.996 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:07.996 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:07.996 Initialization complete. Launching workers. 00:09:07.996 ======================================================== 00:09:07.996 Latency(us) 00:09:07.996 Device Information : IOPS MiB/s Average min max 00:09:07.996 PCIE (0000:00:10.0) NSID 1 from core 0: 10164.97 119.12 12607.14 8814.37 37785.11 00:09:07.996 PCIE (0000:00:11.0) NSID 1 from core 0: 10164.97 119.12 12599.18 8680.83 37700.34 00:09:07.996 PCIE (0000:00:13.0) NSID 1 from core 0: 10164.97 119.12 12590.60 8050.53 38111.03 00:09:07.996 PCIE (0000:00:12.0) NSID 1 from core 0: 10164.97 119.12 12581.48 7641.34 38093.54 00:09:07.996 PCIE (0000:00:12.0) NSID 2 from core 0: 10164.97 119.12 12571.58 7462.31 38075.33 00:09:07.996 PCIE (0000:00:12.0) NSID 3 from core 0: 10228.90 119.87 12484.90 6814.11 29686.42 00:09:07.996 ======================================================== 00:09:07.996 Total : 61053.76 715.47 12572.39 6814.11 38111.03 00:09:07.996 00:09:07.996 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:07.996 ================================================================================= 00:09:07.996 1.00000% : 9211.888us 00:09:07.996 10.00000% : 9633.002us 00:09:07.996 25.00000% : 10054.117us 00:09:07.996 50.00000% : 11212.183us 00:09:07.996 75.00000% : 14633.741us 00:09:07.996 90.00000% : 17370.988us 00:09:07.996 95.00000% : 18213.218us 00:09:07.996 98.00000% : 19687.120us 00:09:07.996 99.00000% : 26846.072us 00:09:07.996 99.50000% : 36215.878us 00:09:07.996 99.90000% : 37479.222us 00:09:07.996 99.99000% : 37900.337us 00:09:07.996 99.99900% : 37900.337us 00:09:07.996 99.99990% : 37900.337us 00:09:07.996 99.99999% : 37900.337us 00:09:07.996 00:09:07.996 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:07.996 ================================================================================= 00:09:07.996 1.00000% : 9264.527us 00:09:07.996 10.00000% : 9685.642us 00:09:07.996 25.00000% : 10001.478us 00:09:07.996 50.00000% : 11159.544us 00:09:07.996 75.00000% : 14633.741us 00:09:07.996 90.00000% : 17476.267us 00:09:07.996 95.00000% : 18423.775us 00:09:07.996 98.00000% : 20108.235us 00:09:07.996 99.00000% : 27372.466us 00:09:07.996 99.50000% : 36215.878us 00:09:07.996 99.90000% : 37479.222us 00:09:07.996 99.99000% : 37689.780us 00:09:07.996 99.99900% : 37900.337us 00:09:07.996 99.99990% : 37900.337us 00:09:07.996 99.99999% : 37900.337us 00:09:07.996 00:09:07.996 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:07.996 
================================================================================= 00:09:07.996 1.00000% : 9264.527us 00:09:07.996 10.00000% : 9685.642us 00:09:07.996 25.00000% : 10054.117us 00:09:07.996 50.00000% : 11264.822us 00:09:07.996 75.00000% : 14739.020us 00:09:07.996 90.00000% : 16949.873us 00:09:07.996 95.00000% : 18107.939us 00:09:07.996 98.00000% : 19897.677us 00:09:07.996 99.00000% : 28004.138us 00:09:07.996 99.50000% : 36847.550us 00:09:07.996 99.90000% : 37900.337us 00:09:07.996 99.99000% : 38110.895us 00:09:07.996 99.99900% : 38321.452us 00:09:07.996 99.99990% : 38321.452us 00:09:07.996 99.99999% : 38321.452us 00:09:07.996 00:09:07.996 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:07.996 ================================================================================= 00:09:07.996 1.00000% : 9264.527us 00:09:07.996 10.00000% : 9685.642us 00:09:07.996 25.00000% : 10054.117us 00:09:07.996 50.00000% : 11264.822us 00:09:07.996 75.00000% : 14528.463us 00:09:07.996 90.00000% : 17055.152us 00:09:07.996 95.00000% : 17897.382us 00:09:07.996 98.00000% : 19476.562us 00:09:07.997 99.00000% : 28004.138us 00:09:07.997 99.50000% : 36847.550us 00:09:07.997 99.90000% : 37900.337us 00:09:07.997 99.99000% : 38110.895us 00:09:07.997 99.99900% : 38110.895us 00:09:07.997 99.99990% : 38110.895us 00:09:07.997 99.99999% : 38110.895us 00:09:07.997 00:09:07.997 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:07.997 ================================================================================= 00:09:07.997 1.00000% : 9264.527us 00:09:07.997 10.00000% : 9685.642us 00:09:07.997 25.00000% : 10054.117us 00:09:07.997 50.00000% : 11264.822us 00:09:07.997 75.00000% : 14317.905us 00:09:07.997 90.00000% : 17055.152us 00:09:07.997 95.00000% : 17792.103us 00:09:07.997 98.00000% : 19055.447us 00:09:07.997 99.00000% : 28004.138us 00:09:07.997 99.50000% : 36636.993us 00:09:07.997 99.90000% : 37900.337us 00:09:07.997 99.99000% : 38110.895us 00:09:07.997 99.99900% : 38110.895us 00:09:07.997 99.99990% : 38110.895us 00:09:07.997 99.99999% : 38110.895us 00:09:07.997 00:09:07.997 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:07.997 ================================================================================= 00:09:07.997 1.00000% : 9211.888us 00:09:07.997 10.00000% : 9685.642us 00:09:07.997 25.00000% : 10054.117us 00:09:07.997 50.00000% : 11212.183us 00:09:07.997 75.00000% : 14528.463us 00:09:07.997 90.00000% : 17160.431us 00:09:07.997 95.00000% : 18002.660us 00:09:07.997 98.00000% : 19266.005us 00:09:07.997 99.00000% : 19897.677us 00:09:07.997 99.50000% : 28425.253us 00:09:07.997 99.90000% : 29478.040us 00:09:07.997 99.99000% : 29688.598us 00:09:07.997 99.99900% : 29688.598us 00:09:07.997 99.99990% : 29688.598us 00:09:07.997 99.99999% : 29688.598us 00:09:07.997 00:09:07.997 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:07.997 ============================================================================== 00:09:07.997 Range in us Cumulative IO count 00:09:07.997 8790.773 - 8843.412: 0.0098% ( 1) 00:09:07.997 8843.412 - 8896.051: 0.0197% ( 1) 00:09:07.997 8896.051 - 8948.691: 0.0295% ( 1) 00:09:07.997 8948.691 - 9001.330: 0.1474% ( 12) 00:09:07.997 9001.330 - 9053.969: 0.2064% ( 6) 00:09:07.997 9053.969 - 9106.609: 0.3833% ( 18) 00:09:07.997 9106.609 - 9159.248: 0.6682% ( 29) 00:09:07.997 9159.248 - 9211.888: 1.0318% ( 37) 00:09:07.997 9211.888 - 9264.527: 1.6608% ( 64) 00:09:07.997 9264.527 - 9317.166: 2.6042% ( 96) 
00:09:07.997 9317.166 - 9369.806: 3.6164% ( 103) 00:09:07.997 9369.806 - 9422.445: 4.7465% ( 115) 00:09:07.997 9422.445 - 9475.084: 5.9552% ( 123) 00:09:07.997 9475.084 - 9527.724: 7.2917% ( 136) 00:09:07.997 9527.724 - 9580.363: 8.9328% ( 167) 00:09:07.997 9580.363 - 9633.002: 10.4658% ( 156) 00:09:07.997 9633.002 - 9685.642: 12.1167% ( 168) 00:09:07.997 9685.642 - 9738.281: 14.1018% ( 202) 00:09:07.997 9738.281 - 9790.920: 15.8412% ( 177) 00:09:07.997 9790.920 - 9843.560: 18.0621% ( 226) 00:09:07.997 9843.560 - 9896.199: 20.3322% ( 231) 00:09:07.997 9896.199 - 9948.839: 22.5432% ( 225) 00:09:07.997 9948.839 - 10001.478: 24.9116% ( 241) 00:09:07.997 10001.478 - 10054.117: 27.1816% ( 231) 00:09:07.997 10054.117 - 10106.757: 29.1470% ( 200) 00:09:07.997 10106.757 - 10159.396: 30.5326% ( 141) 00:09:07.997 10159.396 - 10212.035: 31.6726% ( 116) 00:09:07.997 10212.035 - 10264.675: 32.8027% ( 115) 00:09:07.997 10264.675 - 10317.314: 34.2178% ( 144) 00:09:07.997 10317.314 - 10369.953: 35.2987% ( 110) 00:09:07.997 10369.953 - 10422.593: 36.5959% ( 132) 00:09:07.997 10422.593 - 10475.232: 37.7850% ( 121) 00:09:07.997 10475.232 - 10527.871: 38.9446% ( 118) 00:09:07.997 10527.871 - 10580.511: 40.0845% ( 116) 00:09:07.997 10580.511 - 10633.150: 41.3522% ( 129) 00:09:07.997 10633.150 - 10685.790: 42.3054% ( 97) 00:09:07.997 10685.790 - 10738.429: 43.3569% ( 107) 00:09:07.997 10738.429 - 10791.068: 44.3298% ( 99) 00:09:07.997 10791.068 - 10843.708: 45.2634% ( 95) 00:09:07.997 10843.708 - 10896.347: 46.0594% ( 81) 00:09:07.997 10896.347 - 10948.986: 47.0028% ( 96) 00:09:07.997 10948.986 - 11001.626: 47.7398% ( 75) 00:09:07.997 11001.626 - 11054.265: 48.3982% ( 67) 00:09:07.997 11054.265 - 11106.904: 48.9780% ( 59) 00:09:07.997 11106.904 - 11159.544: 49.8624% ( 90) 00:09:07.997 11159.544 - 11212.183: 50.5896% ( 74) 00:09:07.997 11212.183 - 11264.822: 51.1792% ( 60) 00:09:07.997 11264.822 - 11317.462: 51.6903% ( 52) 00:09:07.997 11317.462 - 11370.101: 52.2406% ( 56) 00:09:07.997 11370.101 - 11422.741: 52.5649% ( 33) 00:09:07.997 11422.741 - 11475.380: 52.9874% ( 43) 00:09:07.997 11475.380 - 11528.019: 53.3510% ( 37) 00:09:07.997 11528.019 - 11580.659: 53.7441% ( 40) 00:09:07.997 11580.659 - 11633.298: 54.1863% ( 45) 00:09:07.997 11633.298 - 11685.937: 54.7661% ( 59) 00:09:07.997 11685.937 - 11738.577: 55.3557% ( 60) 00:09:07.997 11738.577 - 11791.216: 55.8569% ( 51) 00:09:07.997 11791.216 - 11843.855: 56.2402% ( 39) 00:09:07.997 11843.855 - 11896.495: 56.5743% ( 34) 00:09:07.997 11896.495 - 11949.134: 56.8593% ( 29) 00:09:07.997 11949.134 - 12001.773: 57.0263% ( 17) 00:09:07.997 12001.773 - 12054.413: 57.1836% ( 16) 00:09:07.997 12054.413 - 12107.052: 57.5767% ( 40) 00:09:07.997 12107.052 - 12159.692: 57.9009% ( 33) 00:09:07.997 12159.692 - 12212.331: 58.1171% ( 22) 00:09:07.997 12212.331 - 12264.970: 58.3432% ( 23) 00:09:07.997 12264.970 - 12317.610: 58.5397% ( 20) 00:09:07.997 12317.610 - 12370.249: 58.8542% ( 32) 00:09:07.997 12370.249 - 12422.888: 59.2079% ( 36) 00:09:07.997 12422.888 - 12475.528: 59.3652% ( 16) 00:09:07.997 12475.528 - 12528.167: 59.5028% ( 14) 00:09:07.997 12528.167 - 12580.806: 59.7091% ( 21) 00:09:07.997 12580.806 - 12633.446: 60.0727% ( 37) 00:09:07.997 12633.446 - 12686.085: 60.3970% ( 33) 00:09:07.997 12686.085 - 12738.724: 60.7410% ( 35) 00:09:07.997 12738.724 - 12791.364: 61.0653% ( 33) 00:09:07.997 12791.364 - 12844.003: 61.3208% ( 26) 00:09:07.997 12844.003 - 12896.643: 61.7531% ( 44) 00:09:07.997 12896.643 - 12949.282: 62.3329% ( 59) 00:09:07.997 12949.282 - 13001.921: 
62.9717% ( 65) 00:09:07.997 13001.921 - 13054.561: 63.4434% ( 48) 00:09:07.997 13054.561 - 13107.200: 63.9741% ( 54) 00:09:07.997 13107.200 - 13159.839: 64.5047% ( 54) 00:09:07.997 13159.839 - 13212.479: 65.0157% ( 52) 00:09:07.997 13212.479 - 13265.118: 65.6152% ( 61) 00:09:07.997 13265.118 - 13317.757: 66.2736% ( 67) 00:09:07.997 13317.757 - 13370.397: 66.8337% ( 57) 00:09:07.997 13370.397 - 13423.036: 67.4528% ( 63) 00:09:07.997 13423.036 - 13475.676: 67.9344% ( 49) 00:09:07.997 13475.676 - 13580.954: 69.4379% ( 153) 00:09:07.997 13580.954 - 13686.233: 70.2535% ( 83) 00:09:07.997 13686.233 - 13791.512: 70.8923% ( 65) 00:09:07.997 13791.512 - 13896.790: 71.4328% ( 55) 00:09:07.997 13896.790 - 14002.069: 71.9045% ( 48) 00:09:07.997 14002.069 - 14107.348: 72.4941% ( 60) 00:09:07.997 14107.348 - 14212.627: 72.8774% ( 39) 00:09:07.997 14212.627 - 14317.905: 73.2999% ( 43) 00:09:07.997 14317.905 - 14423.184: 73.7127% ( 42) 00:09:07.997 14423.184 - 14528.463: 74.4399% ( 74) 00:09:07.997 14528.463 - 14633.741: 75.2260% ( 80) 00:09:07.997 14633.741 - 14739.020: 76.3267% ( 112) 00:09:07.997 14739.020 - 14844.299: 77.4076% ( 110) 00:09:07.997 14844.299 - 14949.578: 78.2822% ( 89) 00:09:07.997 14949.578 - 15054.856: 79.0291% ( 76) 00:09:07.997 15054.856 - 15160.135: 79.6285% ( 61) 00:09:07.997 15160.135 - 15265.414: 80.1985% ( 58) 00:09:07.997 15265.414 - 15370.692: 80.8274% ( 64) 00:09:07.997 15370.692 - 15475.971: 81.6136% ( 80) 00:09:07.997 15475.971 - 15581.250: 81.9870% ( 38) 00:09:07.997 15581.250 - 15686.529: 82.4882% ( 51) 00:09:07.997 15686.529 - 15791.807: 82.9599% ( 48) 00:09:07.997 15791.807 - 15897.086: 83.4513% ( 50) 00:09:07.997 15897.086 - 16002.365: 83.9131% ( 47) 00:09:07.997 16002.365 - 16107.643: 84.3455% ( 44) 00:09:07.997 16107.643 - 16212.922: 84.8172% ( 48) 00:09:07.997 16212.922 - 16318.201: 85.2300% ( 42) 00:09:07.997 16318.201 - 16423.480: 85.7213% ( 50) 00:09:07.997 16423.480 - 16528.758: 86.1439% ( 43) 00:09:07.997 16528.758 - 16634.037: 86.6057% ( 47) 00:09:07.997 16634.037 - 16739.316: 87.0381% ( 44) 00:09:07.997 16739.316 - 16844.594: 87.5295% ( 50) 00:09:07.997 16844.594 - 16949.873: 88.0405% ( 52) 00:09:07.997 16949.873 - 17055.152: 88.6596% ( 63) 00:09:07.997 17055.152 - 17160.431: 89.2394% ( 59) 00:09:07.997 17160.431 - 17265.709: 89.9371% ( 71) 00:09:07.997 17265.709 - 17370.988: 90.3204% ( 39) 00:09:07.997 17370.988 - 17476.267: 90.8314% ( 52) 00:09:07.997 17476.267 - 17581.545: 91.5389% ( 72) 00:09:07.997 17581.545 - 17686.824: 92.2072% ( 68) 00:09:07.997 17686.824 - 17792.103: 92.7673% ( 57) 00:09:07.997 17792.103 - 17897.382: 93.3176% ( 56) 00:09:07.997 17897.382 - 18002.660: 93.9858% ( 68) 00:09:07.997 18002.660 - 18107.939: 94.6443% ( 67) 00:09:07.997 18107.939 - 18213.218: 95.1061% ( 47) 00:09:07.997 18213.218 - 18318.496: 95.5189% ( 42) 00:09:07.997 18318.496 - 18423.775: 95.8432% ( 33) 00:09:07.997 18423.775 - 18529.054: 96.1675% ( 33) 00:09:07.997 18529.054 - 18634.333: 96.5802% ( 42) 00:09:07.997 18634.333 - 18739.611: 96.9143% ( 34) 00:09:07.997 18739.611 - 18844.890: 97.1600% ( 25) 00:09:07.997 18844.890 - 18950.169: 97.2288% ( 7) 00:09:07.997 18950.169 - 19055.447: 97.2779% ( 5) 00:09:07.997 19055.447 - 19160.726: 97.4155% ( 14) 00:09:07.997 19160.726 - 19266.005: 97.5334% ( 12) 00:09:07.997 19266.005 - 19371.284: 97.6513% ( 12) 00:09:07.997 19371.284 - 19476.562: 97.7889% ( 14) 00:09:07.997 19476.562 - 19581.841: 97.9461% ( 16) 00:09:07.998 19581.841 - 19687.120: 98.0248% ( 8) 00:09:07.998 19687.120 - 19792.398: 98.1918% ( 17) 00:09:07.998 
19792.398 - 19897.677: 98.3687% ( 18) 00:09:07.998 19897.677 - 20002.956: 98.5063% ( 14) 00:09:07.998 20002.956 - 20108.235: 98.5554% ( 5) 00:09:07.998 20108.235 - 20213.513: 98.5751% ( 2) 00:09:07.998 20213.513 - 20318.792: 98.6046% ( 3) 00:09:07.998 20318.792 - 20424.071: 98.6439% ( 4) 00:09:07.998 20424.071 - 20529.349: 98.7225% ( 8) 00:09:07.998 20529.349 - 20634.628: 98.7421% ( 2) 00:09:07.998 26214.400 - 26319.679: 98.7520% ( 1) 00:09:07.998 26424.957 - 26530.236: 98.8404% ( 9) 00:09:07.998 26530.236 - 26635.515: 98.9485% ( 11) 00:09:07.998 26635.515 - 26740.794: 98.9780% ( 3) 00:09:07.998 26740.794 - 26846.072: 99.0173% ( 4) 00:09:07.998 26846.072 - 26951.351: 99.0566% ( 4) 00:09:07.998 26951.351 - 27161.908: 99.1647% ( 11) 00:09:07.998 27161.908 - 27372.466: 99.2826% ( 12) 00:09:07.998 27372.466 - 27583.023: 99.3711% ( 9) 00:09:07.998 35373.648 - 35584.206: 99.3809% ( 1) 00:09:07.998 35584.206 - 35794.763: 99.4300% ( 5) 00:09:07.998 35794.763 - 36005.320: 99.4792% ( 5) 00:09:07.998 36005.320 - 36215.878: 99.5381% ( 6) 00:09:07.998 36215.878 - 36426.435: 99.5971% ( 6) 00:09:07.998 36426.435 - 36636.993: 99.6659% ( 7) 00:09:07.998 36636.993 - 36847.550: 99.7150% ( 5) 00:09:07.998 36847.550 - 37058.108: 99.7838% ( 7) 00:09:07.998 37058.108 - 37268.665: 99.8526% ( 7) 00:09:07.998 37268.665 - 37479.222: 99.9017% ( 5) 00:09:07.998 37479.222 - 37689.780: 99.9705% ( 7) 00:09:07.998 37689.780 - 37900.337: 100.0000% ( 3) 00:09:07.998 00:09:07.998 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:07.998 ============================================================================== 00:09:07.998 Range in us Cumulative IO count 00:09:07.998 8632.855 - 8685.494: 0.0098% ( 1) 00:09:07.998 9001.330 - 9053.969: 0.0590% ( 5) 00:09:07.998 9053.969 - 9106.609: 0.1474% ( 9) 00:09:07.998 9106.609 - 9159.248: 0.3243% ( 18) 00:09:07.998 9159.248 - 9211.888: 0.6584% ( 34) 00:09:07.998 9211.888 - 9264.527: 1.1891% ( 54) 00:09:07.998 9264.527 - 9317.166: 1.9064% ( 73) 00:09:07.998 9317.166 - 9369.806: 2.6828% ( 79) 00:09:07.998 9369.806 - 9422.445: 3.5476% ( 88) 00:09:07.998 9422.445 - 9475.084: 4.9627% ( 144) 00:09:07.998 9475.084 - 9527.724: 6.4072% ( 147) 00:09:07.998 9527.724 - 9580.363: 7.9501% ( 157) 00:09:07.998 9580.363 - 9633.002: 9.9057% ( 199) 00:09:07.998 9633.002 - 9685.642: 12.0873% ( 222) 00:09:07.998 9685.642 - 9738.281: 14.4064% ( 236) 00:09:07.998 9738.281 - 9790.920: 16.6961% ( 233) 00:09:07.998 9790.920 - 9843.560: 18.9957% ( 234) 00:09:07.998 9843.560 - 9896.199: 21.1478% ( 219) 00:09:07.998 9896.199 - 9948.839: 23.2803% ( 217) 00:09:07.998 9948.839 - 10001.478: 25.2358% ( 199) 00:09:07.998 10001.478 - 10054.117: 27.1030% ( 190) 00:09:07.998 10054.117 - 10106.757: 29.0094% ( 194) 00:09:07.998 10106.757 - 10159.396: 31.1222% ( 215) 00:09:07.998 10159.396 - 10212.035: 32.9304% ( 184) 00:09:07.998 10212.035 - 10264.675: 34.3357% ( 143) 00:09:07.998 10264.675 - 10317.314: 35.6820% ( 137) 00:09:07.998 10317.314 - 10369.953: 36.9202% ( 126) 00:09:07.998 10369.953 - 10422.593: 38.1093% ( 121) 00:09:07.998 10422.593 - 10475.232: 39.0330% ( 94) 00:09:07.998 10475.232 - 10527.871: 39.9666% ( 95) 00:09:07.998 10527.871 - 10580.511: 40.8510% ( 90) 00:09:07.998 10580.511 - 10633.150: 41.8632% ( 103) 00:09:07.998 10633.150 - 10685.790: 42.8557% ( 101) 00:09:07.998 10685.790 - 10738.429: 43.7991% ( 96) 00:09:07.998 10738.429 - 10791.068: 44.7327% ( 95) 00:09:07.998 10791.068 - 10843.708: 45.7351% ( 102) 00:09:07.998 10843.708 - 10896.347: 46.7669% ( 105) 00:09:07.998 10896.347 - 
10948.986: 47.6906% ( 94) 00:09:07.998 10948.986 - 11001.626: 48.5063% ( 83) 00:09:07.998 11001.626 - 11054.265: 49.2630% ( 77) 00:09:07.998 11054.265 - 11106.904: 49.9214% ( 67) 00:09:07.998 11106.904 - 11159.544: 50.4914% ( 58) 00:09:07.998 11159.544 - 11212.183: 50.9827% ( 50) 00:09:07.998 11212.183 - 11264.822: 51.6116% ( 64) 00:09:07.998 11264.822 - 11317.462: 52.3683% ( 77) 00:09:07.998 11317.462 - 11370.101: 52.9285% ( 57) 00:09:07.998 11370.101 - 11422.741: 53.5279% ( 61) 00:09:07.998 11422.741 - 11475.380: 53.9112% ( 39) 00:09:07.998 11475.380 - 11528.019: 54.1765% ( 27) 00:09:07.998 11528.019 - 11580.659: 54.4123% ( 24) 00:09:07.998 11580.659 - 11633.298: 54.6482% ( 24) 00:09:07.998 11633.298 - 11685.937: 54.9430% ( 30) 00:09:07.998 11685.937 - 11738.577: 55.3066% ( 37) 00:09:07.998 11738.577 - 11791.216: 55.8176% ( 52) 00:09:07.998 11791.216 - 11843.855: 56.2697% ( 46) 00:09:07.998 11843.855 - 11896.495: 56.8298% ( 57) 00:09:07.998 11896.495 - 11949.134: 57.3998% ( 58) 00:09:07.998 11949.134 - 12001.773: 57.8027% ( 41) 00:09:07.998 12001.773 - 12054.413: 58.2645% ( 47) 00:09:07.998 12054.413 - 12107.052: 58.6969% ( 44) 00:09:07.998 12107.052 - 12159.692: 59.0507% ( 36) 00:09:07.998 12159.692 - 12212.331: 59.3848% ( 34) 00:09:07.998 12212.331 - 12264.970: 59.6698% ( 29) 00:09:07.998 12264.970 - 12317.610: 59.8762% ( 21) 00:09:07.998 12317.610 - 12370.249: 60.1219% ( 25) 00:09:07.998 12370.249 - 12422.888: 60.3381% ( 22) 00:09:07.998 12422.888 - 12475.528: 60.6623% ( 33) 00:09:07.998 12475.528 - 12528.167: 61.0161% ( 36) 00:09:07.998 12528.167 - 12580.806: 61.4289% ( 42) 00:09:07.998 12580.806 - 12633.446: 61.8121% ( 39) 00:09:07.998 12633.446 - 12686.085: 62.1855% ( 38) 00:09:07.998 12686.085 - 12738.724: 62.6179% ( 44) 00:09:07.998 12738.724 - 12791.364: 63.0307% ( 42) 00:09:07.998 12791.364 - 12844.003: 63.3746% ( 35) 00:09:07.998 12844.003 - 12896.643: 63.8365% ( 47) 00:09:07.998 12896.643 - 12949.282: 64.2689% ( 44) 00:09:07.998 12949.282 - 13001.921: 64.8388% ( 58) 00:09:07.998 13001.921 - 13054.561: 65.2909% ( 46) 00:09:07.998 13054.561 - 13107.200: 65.5660% ( 28) 00:09:07.998 13107.200 - 13159.839: 65.8903% ( 33) 00:09:07.998 13159.839 - 13212.479: 66.2736% ( 39) 00:09:07.998 13212.479 - 13265.118: 66.6372% ( 37) 00:09:07.998 13265.118 - 13317.757: 67.0008% ( 37) 00:09:07.998 13317.757 - 13370.397: 67.3153% ( 32) 00:09:07.998 13370.397 - 13423.036: 67.6101% ( 30) 00:09:07.998 13423.036 - 13475.676: 67.9049% ( 30) 00:09:07.998 13475.676 - 13580.954: 68.4945% ( 60) 00:09:07.998 13580.954 - 13686.233: 69.0350% ( 55) 00:09:07.998 13686.233 - 13791.512: 69.7131% ( 69) 00:09:07.998 13791.512 - 13896.790: 70.3223% ( 62) 00:09:07.998 13896.790 - 14002.069: 71.0102% ( 70) 00:09:07.998 14002.069 - 14107.348: 71.7866% ( 79) 00:09:07.998 14107.348 - 14212.627: 72.9756% ( 121) 00:09:07.998 14212.627 - 14317.905: 73.7814% ( 82) 00:09:07.998 14317.905 - 14423.184: 74.4202% ( 65) 00:09:07.998 14423.184 - 14528.463: 74.9017% ( 49) 00:09:07.998 14528.463 - 14633.741: 75.2162% ( 32) 00:09:07.998 14633.741 - 14739.020: 75.5503% ( 34) 00:09:07.998 14739.020 - 14844.299: 75.9631% ( 42) 00:09:07.998 14844.299 - 14949.578: 76.6116% ( 66) 00:09:07.998 14949.578 - 15054.856: 77.3487% ( 75) 00:09:07.998 15054.856 - 15160.135: 78.1053% ( 77) 00:09:07.998 15160.135 - 15265.414: 78.7539% ( 66) 00:09:07.998 15265.414 - 15370.692: 79.6482% ( 91) 00:09:07.998 15370.692 - 15475.971: 80.6702% ( 104) 00:09:07.998 15475.971 - 15581.250: 81.8888% ( 124) 00:09:07.998 15581.250 - 15686.529: 83.2645% ( 140) 
00:09:07.998 15686.529 - 15791.807: 84.4340% ( 119) 00:09:07.998 15791.807 - 15897.086: 85.1513% ( 73) 00:09:07.998 15897.086 - 16002.365: 85.7901% ( 65) 00:09:07.998 16002.365 - 16107.643: 86.2323% ( 45) 00:09:07.998 16107.643 - 16212.922: 86.5075% ( 28) 00:09:07.998 16212.922 - 16318.201: 86.7433% ( 24) 00:09:07.998 16318.201 - 16423.480: 86.9497% ( 21) 00:09:07.998 16423.480 - 16528.758: 87.1364% ( 19) 00:09:07.998 16528.758 - 16634.037: 87.3428% ( 21) 00:09:07.998 16634.037 - 16739.316: 87.5590% ( 22) 00:09:07.998 16739.316 - 16844.594: 87.9029% ( 35) 00:09:07.998 16844.594 - 16949.873: 88.2370% ( 34) 00:09:07.998 16949.873 - 17055.152: 88.6203% ( 39) 00:09:07.998 17055.152 - 17160.431: 88.9937% ( 38) 00:09:07.999 17160.431 - 17265.709: 89.4064% ( 42) 00:09:07.999 17265.709 - 17370.988: 89.8781% ( 48) 00:09:07.999 17370.988 - 17476.267: 90.4678% ( 60) 00:09:07.999 17476.267 - 17581.545: 90.9296% ( 47) 00:09:07.999 17581.545 - 17686.824: 91.4210% ( 50) 00:09:07.999 17686.824 - 17792.103: 91.9123% ( 50) 00:09:07.999 17792.103 - 17897.382: 92.4528% ( 55) 00:09:07.999 17897.382 - 18002.660: 92.9737% ( 53) 00:09:07.999 18002.660 - 18107.939: 93.4355% ( 47) 00:09:07.999 18107.939 - 18213.218: 93.9269% ( 50) 00:09:07.999 18213.218 - 18318.496: 94.5755% ( 66) 00:09:07.999 18318.496 - 18423.775: 95.0570% ( 49) 00:09:07.999 18423.775 - 18529.054: 95.5483% ( 50) 00:09:07.999 18529.054 - 18634.333: 95.9513% ( 41) 00:09:07.999 18634.333 - 18739.611: 96.3345% ( 39) 00:09:07.999 18739.611 - 18844.890: 96.6293% ( 30) 00:09:07.999 18844.890 - 18950.169: 96.8259% ( 20) 00:09:07.999 18950.169 - 19055.447: 97.0421% ( 22) 00:09:07.999 19055.447 - 19160.726: 97.2091% ( 17) 00:09:07.999 19160.726 - 19266.005: 97.2877% ( 8) 00:09:07.999 19266.005 - 19371.284: 97.4155% ( 13) 00:09:07.999 19371.284 - 19476.562: 97.5334% ( 12) 00:09:07.999 19476.562 - 19581.841: 97.7005% ( 17) 00:09:07.999 19581.841 - 19687.120: 97.7300% ( 3) 00:09:07.999 19687.120 - 19792.398: 97.7496% ( 2) 00:09:07.999 19792.398 - 19897.677: 97.7693% ( 2) 00:09:07.999 19897.677 - 20002.956: 97.9265% ( 16) 00:09:07.999 20002.956 - 20108.235: 98.0051% ( 8) 00:09:07.999 20108.235 - 20213.513: 98.0936% ( 9) 00:09:07.999 20213.513 - 20318.792: 98.2115% ( 12) 00:09:07.999 20318.792 - 20424.071: 98.3196% ( 11) 00:09:07.999 20424.071 - 20529.349: 98.4277% ( 11) 00:09:07.999 20529.349 - 20634.628: 98.5161% ( 9) 00:09:07.999 20634.628 - 20739.907: 98.5947% ( 8) 00:09:07.999 20739.907 - 20845.186: 98.6537% ( 6) 00:09:07.999 20845.186 - 20950.464: 98.7127% ( 6) 00:09:07.999 20950.464 - 21055.743: 98.7421% ( 3) 00:09:07.999 26424.957 - 26530.236: 98.7520% ( 1) 00:09:07.999 26530.236 - 26635.515: 98.7814% ( 3) 00:09:07.999 26635.515 - 26740.794: 98.8109% ( 3) 00:09:07.999 26740.794 - 26846.072: 98.8502% ( 4) 00:09:07.999 26846.072 - 26951.351: 98.8797% ( 3) 00:09:07.999 26951.351 - 27161.908: 98.9485% ( 7) 00:09:07.999 27161.908 - 27372.466: 99.0271% ( 8) 00:09:07.999 27372.466 - 27583.023: 99.0959% ( 7) 00:09:07.999 27583.023 - 27793.581: 99.1647% ( 7) 00:09:07.999 27793.581 - 28004.138: 99.2335% ( 7) 00:09:07.999 28004.138 - 28214.696: 99.3023% ( 7) 00:09:07.999 28214.696 - 28425.253: 99.3612% ( 6) 00:09:07.999 28425.253 - 28635.810: 99.3711% ( 1) 00:09:07.999 35794.763 - 36005.320: 99.4300% ( 6) 00:09:07.999 36005.320 - 36215.878: 99.5086% ( 8) 00:09:07.999 36215.878 - 36426.435: 99.5774% ( 7) 00:09:07.999 36426.435 - 36636.993: 99.6561% ( 8) 00:09:07.999 36636.993 - 36847.550: 99.7150% ( 6) 00:09:07.999 36847.550 - 37058.108: 99.7838% ( 7) 00:09:07.999 
37058.108 - 37268.665: 99.8526% ( 7) 00:09:07.999 37268.665 - 37479.222: 99.9214% ( 7) 00:09:07.999 37479.222 - 37689.780: 99.9902% ( 7) 00:09:07.999 37689.780 - 37900.337: 100.0000% ( 1) 00:09:07.999 00:09:07.999 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:07.999 ============================================================================== 00:09:07.999 Range in us Cumulative IO count 00:09:07.999 8001.182 - 8053.822: 0.0098% ( 1) 00:09:07.999 8053.822 - 8106.461: 0.0590% ( 5) 00:09:07.999 8106.461 - 8159.100: 0.0983% ( 4) 00:09:07.999 8159.100 - 8211.740: 0.1671% ( 7) 00:09:07.999 8211.740 - 8264.379: 0.2752% ( 11) 00:09:07.999 8264.379 - 8317.018: 0.4422% ( 17) 00:09:07.999 8317.018 - 8369.658: 0.4717% ( 3) 00:09:07.999 8369.658 - 8422.297: 0.4914% ( 2) 00:09:07.999 8422.297 - 8474.937: 0.5110% ( 2) 00:09:07.999 8474.937 - 8527.576: 0.5307% ( 2) 00:09:07.999 8527.576 - 8580.215: 0.5503% ( 2) 00:09:07.999 8580.215 - 8632.855: 0.5601% ( 1) 00:09:07.999 8632.855 - 8685.494: 0.5798% ( 2) 00:09:07.999 8685.494 - 8738.133: 0.5994% ( 2) 00:09:07.999 8738.133 - 8790.773: 0.6191% ( 2) 00:09:07.999 8790.773 - 8843.412: 0.6289% ( 1) 00:09:07.999 9053.969 - 9106.609: 0.6879% ( 6) 00:09:07.999 9106.609 - 9159.248: 0.7665% ( 8) 00:09:07.999 9159.248 - 9211.888: 0.9237% ( 16) 00:09:07.999 9211.888 - 9264.527: 1.1694% ( 25) 00:09:07.999 9264.527 - 9317.166: 1.7983% ( 64) 00:09:07.999 9317.166 - 9369.806: 2.6631% ( 88) 00:09:07.999 9369.806 - 9422.445: 3.6065% ( 96) 00:09:07.999 9422.445 - 9475.084: 4.7366% ( 115) 00:09:07.999 9475.084 - 9527.724: 6.0142% ( 130) 00:09:07.999 9527.724 - 9580.363: 7.4587% ( 147) 00:09:07.999 9580.363 - 9633.002: 9.1981% ( 177) 00:09:07.999 9633.002 - 9685.642: 10.8687% ( 170) 00:09:07.999 9685.642 - 9738.281: 13.0012% ( 217) 00:09:07.999 9738.281 - 9790.920: 15.1435% ( 218) 00:09:07.999 9790.920 - 9843.560: 16.8632% ( 175) 00:09:07.999 9843.560 - 9896.199: 19.0939% ( 227) 00:09:07.999 9896.199 - 9948.839: 21.1380% ( 208) 00:09:07.999 9948.839 - 10001.478: 23.2311% ( 213) 00:09:07.999 10001.478 - 10054.117: 25.4520% ( 226) 00:09:07.999 10054.117 - 10106.757: 27.1521% ( 173) 00:09:07.999 10106.757 - 10159.396: 28.8620% ( 174) 00:09:07.999 10159.396 - 10212.035: 30.7095% ( 188) 00:09:07.999 10212.035 - 10264.675: 32.3801% ( 170) 00:09:07.999 10264.675 - 10317.314: 34.1293% ( 178) 00:09:07.999 10317.314 - 10369.953: 35.8196% ( 172) 00:09:07.999 10369.953 - 10422.593: 37.1757% ( 138) 00:09:07.999 10422.593 - 10475.232: 38.5515% ( 140) 00:09:07.999 10475.232 - 10527.871: 39.7504% ( 122) 00:09:07.999 10527.871 - 10580.511: 40.7822% ( 105) 00:09:07.999 10580.511 - 10633.150: 42.0303% ( 127) 00:09:07.999 10633.150 - 10685.790: 42.8852% ( 87) 00:09:07.999 10685.790 - 10738.429: 43.6321% ( 76) 00:09:07.999 10738.429 - 10791.068: 44.3101% ( 69) 00:09:07.999 10791.068 - 10843.708: 44.8015% ( 50) 00:09:07.999 10843.708 - 10896.347: 45.4599% ( 67) 00:09:07.999 10896.347 - 10948.986: 46.1773% ( 73) 00:09:07.999 10948.986 - 11001.626: 46.7964% ( 63) 00:09:07.999 11001.626 - 11054.265: 47.5334% ( 75) 00:09:07.999 11054.265 - 11106.904: 48.4572% ( 94) 00:09:07.999 11106.904 - 11159.544: 49.1745% ( 73) 00:09:07.999 11159.544 - 11212.183: 49.7150% ( 55) 00:09:07.999 11212.183 - 11264.822: 50.6977% ( 100) 00:09:07.999 11264.822 - 11317.462: 51.4741% ( 79) 00:09:07.999 11317.462 - 11370.101: 52.0047% ( 54) 00:09:07.999 11370.101 - 11422.741: 52.6042% ( 61) 00:09:07.999 11422.741 - 11475.380: 53.3412% ( 75) 00:09:07.999 11475.380 - 11528.019: 53.9013% ( 57) 00:09:07.999 
11528.019 - 11580.659: 54.5204% ( 63) 00:09:07.999 11580.659 - 11633.298: 55.2869% ( 78) 00:09:07.999 11633.298 - 11685.937: 55.8471% ( 57) 00:09:07.999 11685.937 - 11738.577: 56.3679% ( 53) 00:09:07.999 11738.577 - 11791.216: 56.8396% ( 48) 00:09:07.999 11791.216 - 11843.855: 57.1737% ( 34) 00:09:07.999 11843.855 - 11896.495: 57.5767% ( 41) 00:09:07.999 11896.495 - 11949.134: 57.8420% ( 27) 00:09:07.999 11949.134 - 12001.773: 58.1368% ( 30) 00:09:07.999 12001.773 - 12054.413: 58.3333% ( 20) 00:09:07.999 12054.413 - 12107.052: 58.5594% ( 23) 00:09:07.999 12107.052 - 12159.692: 58.8542% ( 30) 00:09:07.999 12159.692 - 12212.331: 59.1293% ( 28) 00:09:07.999 12212.331 - 12264.970: 59.4340% ( 31) 00:09:07.999 12264.970 - 12317.610: 59.7877% ( 36) 00:09:07.999 12317.610 - 12370.249: 60.2103% ( 43) 00:09:07.999 12370.249 - 12422.888: 60.5444% ( 34) 00:09:07.999 12422.888 - 12475.528: 61.1439% ( 61) 00:09:07.999 12475.528 - 12528.167: 61.7335% ( 60) 00:09:07.999 12528.167 - 12580.806: 62.1364% ( 41) 00:09:07.999 12580.806 - 12633.446: 62.4705% ( 34) 00:09:07.999 12633.446 - 12686.085: 62.8145% ( 35) 00:09:07.999 12686.085 - 12738.724: 63.0994% ( 29) 00:09:07.999 12738.724 - 12791.364: 63.4041% ( 31) 00:09:07.999 12791.364 - 12844.003: 63.8463% ( 45) 00:09:07.999 12844.003 - 12896.643: 64.1903% ( 35) 00:09:07.999 12896.643 - 12949.282: 64.5145% ( 33) 00:09:07.999 12949.282 - 13001.921: 65.0059% ( 50) 00:09:07.999 13001.921 - 13054.561: 65.4186% ( 42) 00:09:07.999 13054.561 - 13107.200: 65.8608% ( 45) 00:09:07.999 13107.200 - 13159.839: 66.1950% ( 34) 00:09:07.999 13159.839 - 13212.479: 66.5979% ( 41) 00:09:07.999 13212.479 - 13265.118: 66.9713% ( 38) 00:09:07.999 13265.118 - 13317.757: 67.3349% ( 37) 00:09:07.999 13317.757 - 13370.397: 67.7083% ( 38) 00:09:07.999 13370.397 - 13423.036: 68.1407% ( 44) 00:09:07.999 13423.036 - 13475.676: 68.7205% ( 59) 00:09:07.999 13475.676 - 13580.954: 69.7327% ( 103) 00:09:07.999 13580.954 - 13686.233: 70.4697% ( 75) 00:09:07.999 13686.233 - 13791.512: 71.0102% ( 55) 00:09:07.999 13791.512 - 13896.790: 71.6785% ( 68) 00:09:07.999 13896.790 - 14002.069: 72.2386% ( 57) 00:09:07.999 14002.069 - 14107.348: 72.6219% ( 39) 00:09:07.999 14107.348 - 14212.627: 72.9756% ( 36) 00:09:07.999 14212.627 - 14317.905: 73.4178% ( 45) 00:09:07.999 14317.905 - 14423.184: 73.8404% ( 43) 00:09:07.999 14423.184 - 14528.463: 74.2040% ( 37) 00:09:07.999 14528.463 - 14633.741: 74.6364% ( 44) 00:09:07.999 14633.741 - 14739.020: 75.2457% ( 62) 00:09:07.999 14739.020 - 14844.299: 76.1989% ( 97) 00:09:07.999 14844.299 - 14949.578: 77.0145% ( 83) 00:09:07.999 14949.578 - 15054.856: 77.6336% ( 63) 00:09:07.999 15054.856 - 15160.135: 78.4395% ( 82) 00:09:07.999 15160.135 - 15265.414: 79.1863% ( 76) 00:09:07.999 15265.414 - 15370.692: 79.6384% ( 46) 00:09:07.999 15370.692 - 15475.971: 80.0216% ( 39) 00:09:07.999 15475.971 - 15581.250: 80.4540% ( 44) 00:09:07.999 15581.250 - 15686.529: 80.8373% ( 39) 00:09:07.999 15686.529 - 15791.807: 81.2697% ( 44) 00:09:08.000 15791.807 - 15897.086: 81.9182% ( 66) 00:09:08.000 15897.086 - 16002.365: 82.8322% ( 93) 00:09:08.000 16002.365 - 16107.643: 83.7461% ( 93) 00:09:08.000 16107.643 - 16212.922: 84.5617% ( 83) 00:09:08.000 16212.922 - 16318.201: 85.6329% ( 109) 00:09:08.000 16318.201 - 16423.480: 86.4387% ( 82) 00:09:08.000 16423.480 - 16528.758: 87.4902% ( 107) 00:09:08.000 16528.758 - 16634.037: 88.5220% ( 105) 00:09:08.000 16634.037 - 16739.316: 89.1608% ( 65) 00:09:08.000 16739.316 - 16844.594: 89.7111% ( 56) 00:09:08.000 16844.594 - 16949.873: 90.1828% 
( 48) 00:09:08.000 16949.873 - 17055.152: 90.5071% ( 33) 00:09:08.000 17055.152 - 17160.431: 90.8805% ( 38) 00:09:08.000 17160.431 - 17265.709: 91.2834% ( 41) 00:09:08.000 17265.709 - 17370.988: 91.7846% ( 51) 00:09:08.000 17370.988 - 17476.267: 92.4233% ( 65) 00:09:08.000 17476.267 - 17581.545: 92.9933% ( 58) 00:09:08.000 17581.545 - 17686.824: 93.3864% ( 40) 00:09:08.000 17686.824 - 17792.103: 93.8090% ( 43) 00:09:08.000 17792.103 - 17897.382: 94.1824% ( 38) 00:09:08.000 17897.382 - 18002.660: 94.5755% ( 40) 00:09:08.000 18002.660 - 18107.939: 95.0079% ( 44) 00:09:08.000 18107.939 - 18213.218: 95.4403% ( 44) 00:09:08.000 18213.218 - 18318.496: 95.7252% ( 29) 00:09:08.000 18318.496 - 18423.775: 96.0397% ( 32) 00:09:08.000 18423.775 - 18529.054: 96.3149% ( 28) 00:09:08.000 18529.054 - 18634.333: 96.6195% ( 31) 00:09:08.000 18634.333 - 18739.611: 96.8455% ( 23) 00:09:08.000 18739.611 - 18844.890: 97.0028% ( 16) 00:09:08.000 18844.890 - 18950.169: 97.1108% ( 11) 00:09:08.000 18950.169 - 19055.447: 97.2779% ( 17) 00:09:08.000 19055.447 - 19160.726: 97.4450% ( 17) 00:09:08.000 19160.726 - 19266.005: 97.6219% ( 18) 00:09:08.000 19266.005 - 19371.284: 97.7005% ( 8) 00:09:08.000 19371.284 - 19476.562: 97.7791% ( 8) 00:09:08.000 19476.562 - 19581.841: 97.7987% ( 2) 00:09:08.000 19581.841 - 19687.120: 97.8872% ( 9) 00:09:08.000 19687.120 - 19792.398: 97.9855% ( 10) 00:09:08.000 19792.398 - 19897.677: 98.0936% ( 11) 00:09:08.000 19897.677 - 20002.956: 98.2017% ( 11) 00:09:08.000 20002.956 - 20108.235: 98.3097% ( 11) 00:09:08.000 20108.235 - 20213.513: 98.4080% ( 10) 00:09:08.000 20213.513 - 20318.792: 98.5161% ( 11) 00:09:08.000 20318.792 - 20424.071: 98.5751% ( 6) 00:09:08.000 20424.071 - 20529.349: 98.6242% ( 5) 00:09:08.000 20529.349 - 20634.628: 98.6733% ( 5) 00:09:08.000 20634.628 - 20739.907: 98.7323% ( 6) 00:09:08.000 20739.907 - 20845.186: 98.7421% ( 1) 00:09:08.000 26951.351 - 27161.908: 98.7618% ( 2) 00:09:08.000 27161.908 - 27372.466: 98.8306% ( 7) 00:09:08.000 27372.466 - 27583.023: 98.8994% ( 7) 00:09:08.000 27583.023 - 27793.581: 98.9583% ( 6) 00:09:08.000 27793.581 - 28004.138: 99.0271% ( 7) 00:09:08.000 28004.138 - 28214.696: 99.0959% ( 7) 00:09:08.000 28214.696 - 28425.253: 99.1647% ( 7) 00:09:08.000 28425.253 - 28635.810: 99.2433% ( 8) 00:09:08.000 28635.810 - 28846.368: 99.3121% ( 7) 00:09:08.000 28846.368 - 29056.925: 99.3711% ( 6) 00:09:08.000 36215.878 - 36426.435: 99.4300% ( 6) 00:09:08.000 36426.435 - 36636.993: 99.4988% ( 7) 00:09:08.000 36636.993 - 36847.550: 99.5676% ( 7) 00:09:08.000 36847.550 - 37058.108: 99.6364% ( 7) 00:09:08.000 37058.108 - 37268.665: 99.7150% ( 8) 00:09:08.000 37268.665 - 37479.222: 99.7838% ( 7) 00:09:08.000 37479.222 - 37689.780: 99.8428% ( 6) 00:09:08.000 37689.780 - 37900.337: 99.9214% ( 8) 00:09:08.000 37900.337 - 38110.895: 99.9902% ( 7) 00:09:08.000 38110.895 - 38321.452: 100.0000% ( 1) 00:09:08.000 00:09:08.000 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:08.000 ============================================================================== 00:09:08.000 Range in us Cumulative IO count 00:09:08.000 7632.707 - 7685.346: 0.0098% ( 1) 00:09:08.000 7790.625 - 7843.264: 0.0295% ( 2) 00:09:08.000 7843.264 - 7895.904: 0.0884% ( 6) 00:09:08.000 7895.904 - 7948.543: 0.1179% ( 3) 00:09:08.000 7948.543 - 8001.182: 0.1671% ( 5) 00:09:08.000 8001.182 - 8053.822: 0.3439% ( 18) 00:09:08.000 8053.822 - 8106.461: 0.4226% ( 8) 00:09:08.000 8106.461 - 8159.100: 0.4422% ( 2) 00:09:08.000 8159.100 - 8211.740: 0.4717% ( 3) 00:09:08.000 
8211.740 - 8264.379: 0.4914% ( 2) 00:09:08.000 8264.379 - 8317.018: 0.5110% ( 2) 00:09:08.000 8317.018 - 8369.658: 0.5307% ( 2) 00:09:08.000 8369.658 - 8422.297: 0.5601% ( 3) 00:09:08.000 8422.297 - 8474.937: 0.5798% ( 2) 00:09:08.000 8474.937 - 8527.576: 0.5994% ( 2) 00:09:08.000 8527.576 - 8580.215: 0.6191% ( 2) 00:09:08.000 8580.215 - 8632.855: 0.6289% ( 1) 00:09:08.000 8948.691 - 9001.330: 0.6388% ( 1) 00:09:08.000 9001.330 - 9053.969: 0.6486% ( 1) 00:09:08.000 9053.969 - 9106.609: 0.6584% ( 1) 00:09:08.000 9106.609 - 9159.248: 0.7862% ( 13) 00:09:08.000 9159.248 - 9211.888: 0.9925% ( 21) 00:09:08.000 9211.888 - 9264.527: 1.2972% ( 31) 00:09:08.000 9264.527 - 9317.166: 1.8671% ( 58) 00:09:08.000 9317.166 - 9369.806: 2.7614% ( 91) 00:09:08.000 9369.806 - 9422.445: 3.7638% ( 102) 00:09:08.000 9422.445 - 9475.084: 5.0708% ( 133) 00:09:08.000 9475.084 - 9527.724: 6.2991% ( 125) 00:09:08.000 9527.724 - 9580.363: 7.8518% ( 158) 00:09:08.000 9580.363 - 9633.002: 9.4634% ( 164) 00:09:08.000 9633.002 - 9685.642: 11.1733% ( 174) 00:09:08.000 9685.642 - 9738.281: 13.1289% ( 199) 00:09:08.000 9738.281 - 9790.920: 15.2123% ( 212) 00:09:08.000 9790.920 - 9843.560: 17.6887% ( 252) 00:09:08.000 9843.560 - 9896.199: 19.7032% ( 205) 00:09:08.000 9896.199 - 9948.839: 21.7178% ( 205) 00:09:08.000 9948.839 - 10001.478: 23.7323% ( 205) 00:09:08.000 10001.478 - 10054.117: 25.5503% ( 185) 00:09:08.000 10054.117 - 10106.757: 27.2897% ( 177) 00:09:08.000 10106.757 - 10159.396: 29.5991% ( 235) 00:09:08.000 10159.396 - 10212.035: 31.1910% ( 162) 00:09:08.000 10212.035 - 10264.675: 32.4587% ( 129) 00:09:08.000 10264.675 - 10317.314: 34.0114% ( 158) 00:09:08.000 10317.314 - 10369.953: 35.1513% ( 116) 00:09:08.000 10369.953 - 10422.593: 36.2225% ( 109) 00:09:08.000 10422.593 - 10475.232: 37.4312% ( 123) 00:09:08.000 10475.232 - 10527.871: 38.4139% ( 100) 00:09:08.000 10527.871 - 10580.511: 39.3180% ( 92) 00:09:08.000 10580.511 - 10633.150: 40.1926% ( 89) 00:09:08.000 10633.150 - 10685.790: 41.0476% ( 87) 00:09:08.000 10685.790 - 10738.429: 41.7944% ( 76) 00:09:08.000 10738.429 - 10791.068: 42.6002% ( 82) 00:09:08.000 10791.068 - 10843.708: 43.4650% ( 88) 00:09:08.000 10843.708 - 10896.347: 44.3396% ( 89) 00:09:08.000 10896.347 - 10948.986: 45.0963% ( 77) 00:09:08.000 10948.986 - 11001.626: 45.8039% ( 72) 00:09:08.000 11001.626 - 11054.265: 46.8553% ( 107) 00:09:08.000 11054.265 - 11106.904: 47.7201% ( 88) 00:09:08.000 11106.904 - 11159.544: 48.5063% ( 80) 00:09:08.000 11159.544 - 11212.183: 49.7642% ( 128) 00:09:08.000 11212.183 - 11264.822: 50.8844% ( 114) 00:09:08.000 11264.822 - 11317.462: 51.7296% ( 86) 00:09:08.000 11317.462 - 11370.101: 52.4568% ( 74) 00:09:08.000 11370.101 - 11422.741: 53.1545% ( 71) 00:09:08.000 11422.741 - 11475.380: 53.7244% ( 58) 00:09:08.000 11475.380 - 11528.019: 54.3141% ( 60) 00:09:08.000 11528.019 - 11580.659: 54.8054% ( 50) 00:09:08.000 11580.659 - 11633.298: 55.2378% ( 44) 00:09:08.000 11633.298 - 11685.937: 55.6899% ( 46) 00:09:08.000 11685.937 - 11738.577: 56.0436% ( 36) 00:09:08.000 11738.577 - 11791.216: 56.3384% ( 30) 00:09:08.000 11791.216 - 11843.855: 56.6627% ( 33) 00:09:08.000 11843.855 - 11896.495: 56.9379% ( 28) 00:09:08.000 11896.495 - 11949.134: 57.2425% ( 31) 00:09:08.000 11949.134 - 12001.773: 57.5767% ( 34) 00:09:08.000 12001.773 - 12054.413: 57.8616% ( 29) 00:09:08.000 12054.413 - 12107.052: 58.1466% ( 29) 00:09:08.000 12107.052 - 12159.692: 58.4513% ( 31) 00:09:08.000 12159.692 - 12212.331: 58.8443% ( 40) 00:09:08.000 12212.331 - 12264.970: 59.1981% ( 36) 
00:09:08.000 12264.970 - 12317.610: 59.7288% ( 54) 00:09:08.000 12317.610 - 12370.249: 60.2300% ( 51) 00:09:08.000 12370.249 - 12422.888: 60.5444% ( 32) 00:09:08.000 12422.888 - 12475.528: 60.8294% ( 29) 00:09:08.000 12475.528 - 12528.167: 61.0849% ( 26) 00:09:08.000 12528.167 - 12580.806: 61.3306% ( 25) 00:09:08.000 12580.806 - 12633.446: 61.6057% ( 28) 00:09:08.000 12633.446 - 12686.085: 61.9693% ( 37) 00:09:08.000 12686.085 - 12738.724: 62.4312% ( 47) 00:09:08.000 12738.724 - 12791.364: 62.9029% ( 48) 00:09:08.000 12791.364 - 12844.003: 63.4925% ( 60) 00:09:08.000 12844.003 - 12896.643: 64.1509% ( 67) 00:09:08.000 12896.643 - 12949.282: 64.7504% ( 61) 00:09:08.000 12949.282 - 13001.921: 65.0845% ( 34) 00:09:08.000 13001.921 - 13054.561: 65.4383% ( 36) 00:09:08.000 13054.561 - 13107.200: 65.8412% ( 41) 00:09:08.000 13107.200 - 13159.839: 66.2834% ( 45) 00:09:08.000 13159.839 - 13212.479: 66.7649% ( 49) 00:09:08.000 13212.479 - 13265.118: 67.2268% ( 47) 00:09:08.000 13265.118 - 13317.757: 67.7182% ( 50) 00:09:08.000 13317.757 - 13370.397: 68.3373% ( 63) 00:09:08.000 13370.397 - 13423.036: 68.8581% ( 53) 00:09:08.000 13423.036 - 13475.676: 69.3593% ( 51) 00:09:08.000 13475.676 - 13580.954: 70.2142% ( 87) 00:09:08.001 13580.954 - 13686.233: 70.7449% ( 54) 00:09:08.001 13686.233 - 13791.512: 71.1380% ( 40) 00:09:08.001 13791.512 - 13896.790: 71.7472% ( 62) 00:09:08.001 13896.790 - 14002.069: 72.6317% ( 90) 00:09:08.001 14002.069 - 14107.348: 73.2901% ( 67) 00:09:08.001 14107.348 - 14212.627: 73.8306% ( 55) 00:09:08.001 14212.627 - 14317.905: 74.3514% ( 53) 00:09:08.001 14317.905 - 14423.184: 74.8133% ( 47) 00:09:08.001 14423.184 - 14528.463: 75.2457% ( 44) 00:09:08.001 14528.463 - 14633.741: 75.7469% ( 51) 00:09:08.001 14633.741 - 14739.020: 76.3856% ( 65) 00:09:08.001 14739.020 - 14844.299: 77.1914% ( 82) 00:09:08.001 14844.299 - 14949.578: 77.5157% ( 33) 00:09:08.001 14949.578 - 15054.856: 77.8105% ( 30) 00:09:08.001 15054.856 - 15160.135: 78.1643% ( 36) 00:09:08.001 15160.135 - 15265.414: 78.6950% ( 54) 00:09:08.001 15265.414 - 15370.692: 79.4320% ( 75) 00:09:08.001 15370.692 - 15475.971: 80.0020% ( 58) 00:09:08.001 15475.971 - 15581.250: 80.4147% ( 42) 00:09:08.001 15581.250 - 15686.529: 80.8569% ( 45) 00:09:08.001 15686.529 - 15791.807: 81.1910% ( 34) 00:09:08.001 15791.807 - 15897.086: 81.6529% ( 47) 00:09:08.001 15897.086 - 16002.365: 82.3015% ( 66) 00:09:08.001 16002.365 - 16107.643: 82.9304% ( 64) 00:09:08.001 16107.643 - 16212.922: 83.5397% ( 62) 00:09:08.001 16212.922 - 16318.201: 84.3652% ( 84) 00:09:08.001 16318.201 - 16423.480: 84.9744% ( 62) 00:09:08.001 16423.480 - 16528.758: 85.8097% ( 85) 00:09:08.001 16528.758 - 16634.037: 86.6745% ( 88) 00:09:08.001 16634.037 - 16739.316: 87.7653% ( 111) 00:09:08.001 16739.316 - 16844.594: 88.7087% ( 96) 00:09:08.001 16844.594 - 16949.873: 89.6226% ( 93) 00:09:08.001 16949.873 - 17055.152: 90.3498% ( 74) 00:09:08.001 17055.152 - 17160.431: 91.1851% ( 85) 00:09:08.001 17160.431 - 17265.709: 92.0303% ( 86) 00:09:08.001 17265.709 - 17370.988: 92.7378% ( 72) 00:09:08.001 17370.988 - 17476.267: 93.3471% ( 62) 00:09:08.001 17476.267 - 17581.545: 93.9367% ( 60) 00:09:08.001 17581.545 - 17686.824: 94.5067% ( 58) 00:09:08.001 17686.824 - 17792.103: 94.9686% ( 47) 00:09:08.001 17792.103 - 17897.382: 95.3616% ( 40) 00:09:08.001 17897.382 - 18002.660: 95.7547% ( 40) 00:09:08.001 18002.660 - 18107.939: 96.1183% ( 37) 00:09:08.001 18107.939 - 18213.218: 96.4426% ( 33) 00:09:08.001 18213.218 - 18318.496: 96.6883% ( 25) 00:09:08.001 18318.496 - 
18423.775: 96.8455% ( 16) 00:09:08.001 18423.775 - 18529.054: 97.0126% ( 17) 00:09:08.001 18529.054 - 18634.333: 97.1502% ( 14) 00:09:08.001 18634.333 - 18739.611: 97.3565% ( 21) 00:09:08.001 18739.611 - 18844.890: 97.5629% ( 21) 00:09:08.001 18844.890 - 18950.169: 97.6906% ( 13) 00:09:08.001 18950.169 - 19055.447: 97.7594% ( 7) 00:09:08.001 19055.447 - 19160.726: 97.7889% ( 3) 00:09:08.001 19160.726 - 19266.005: 97.8184% ( 3) 00:09:08.001 19266.005 - 19371.284: 97.8970% ( 8) 00:09:08.001 19371.284 - 19476.562: 98.0248% ( 13) 00:09:08.001 19476.562 - 19581.841: 98.1132% ( 9) 00:09:08.001 19581.841 - 19687.120: 98.2213% ( 11) 00:09:08.001 19687.120 - 19792.398: 98.3196% ( 10) 00:09:08.001 19792.398 - 19897.677: 98.4178% ( 10) 00:09:08.001 19897.677 - 20002.956: 98.5161% ( 10) 00:09:08.001 20002.956 - 20108.235: 98.5751% ( 6) 00:09:08.001 20108.235 - 20213.513: 98.6340% ( 6) 00:09:08.001 20213.513 - 20318.792: 98.6832% ( 5) 00:09:08.001 20318.792 - 20424.071: 98.7421% ( 6) 00:09:08.001 27161.908 - 27372.466: 98.8011% ( 6) 00:09:08.001 27372.466 - 27583.023: 98.8797% ( 8) 00:09:08.001 27583.023 - 27793.581: 98.9485% ( 7) 00:09:08.001 27793.581 - 28004.138: 99.0271% ( 8) 00:09:08.001 28004.138 - 28214.696: 99.0959% ( 7) 00:09:08.001 28214.696 - 28425.253: 99.1647% ( 7) 00:09:08.001 28425.253 - 28635.810: 99.2433% ( 8) 00:09:08.001 28635.810 - 28846.368: 99.3121% ( 7) 00:09:08.001 28846.368 - 29056.925: 99.3711% ( 6) 00:09:08.001 36215.878 - 36426.435: 99.4300% ( 6) 00:09:08.001 36426.435 - 36636.993: 99.4988% ( 7) 00:09:08.001 36636.993 - 36847.550: 99.5676% ( 7) 00:09:08.001 36847.550 - 37058.108: 99.6364% ( 7) 00:09:08.001 37058.108 - 37268.665: 99.7052% ( 7) 00:09:08.001 37268.665 - 37479.222: 99.7740% ( 7) 00:09:08.001 37479.222 - 37689.780: 99.8526% ( 8) 00:09:08.001 37689.780 - 37900.337: 99.9312% ( 8) 00:09:08.001 37900.337 - 38110.895: 100.0000% ( 7) 00:09:08.001 00:09:08.001 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:08.001 ============================================================================== 00:09:08.001 Range in us Cumulative IO count 00:09:08.001 7422.149 - 7474.789: 0.0295% ( 3) 00:09:08.001 7474.789 - 7527.428: 0.0786% ( 5) 00:09:08.001 7527.428 - 7580.067: 0.1376% ( 6) 00:09:08.001 7580.067 - 7632.707: 0.2653% ( 13) 00:09:08.001 7632.707 - 7685.346: 0.3931% ( 13) 00:09:08.001 7685.346 - 7737.986: 0.4226% ( 3) 00:09:08.001 7737.986 - 7790.625: 0.4422% ( 2) 00:09:08.001 7790.625 - 7843.264: 0.4619% ( 2) 00:09:08.001 7843.264 - 7895.904: 0.4815% ( 2) 00:09:08.001 7895.904 - 7948.543: 0.5110% ( 3) 00:09:08.001 7948.543 - 8001.182: 0.5307% ( 2) 00:09:08.001 8001.182 - 8053.822: 0.5503% ( 2) 00:09:08.001 8053.822 - 8106.461: 0.5601% ( 1) 00:09:08.001 8106.461 - 8159.100: 0.5896% ( 3) 00:09:08.001 8159.100 - 8211.740: 0.6093% ( 2) 00:09:08.001 8211.740 - 8264.379: 0.6289% ( 2) 00:09:08.001 9001.330 - 9053.969: 0.6388% ( 1) 00:09:08.001 9053.969 - 9106.609: 0.6486% ( 1) 00:09:08.001 9106.609 - 9159.248: 0.6879% ( 4) 00:09:08.001 9159.248 - 9211.888: 0.8648% ( 18) 00:09:08.001 9211.888 - 9264.527: 1.2087% ( 35) 00:09:08.001 9264.527 - 9317.166: 1.7099% ( 51) 00:09:08.001 9317.166 - 9369.806: 2.4175% ( 72) 00:09:08.001 9369.806 - 9422.445: 3.3019% ( 90) 00:09:08.001 9422.445 - 9475.084: 4.5401% ( 126) 00:09:08.001 9475.084 - 9527.724: 5.8864% ( 137) 00:09:08.001 9527.724 - 9580.363: 7.1836% ( 132) 00:09:08.001 9580.363 - 9633.002: 8.7952% ( 164) 00:09:08.001 9633.002 - 9685.642: 10.8294% ( 207) 00:09:08.001 9685.642 - 9738.281: 13.1977% ( 241) 
00:09:08.001 9738.281 - 9790.920: 15.8805% ( 273) 00:09:08.001 9790.920 - 9843.560: 18.5240% ( 269) 00:09:08.001 9843.560 - 9896.199: 20.8039% ( 232) 00:09:08.001 9896.199 - 9948.839: 22.8086% ( 204) 00:09:08.001 9948.839 - 10001.478: 24.7642% ( 199) 00:09:08.001 10001.478 - 10054.117: 26.6411% ( 191) 00:09:08.001 10054.117 - 10106.757: 28.1643% ( 155) 00:09:08.001 10106.757 - 10159.396: 29.3534% ( 121) 00:09:08.001 10159.396 - 10212.035: 30.8962% ( 157) 00:09:08.001 10212.035 - 10264.675: 32.2622% ( 139) 00:09:08.001 10264.675 - 10317.314: 33.2744% ( 103) 00:09:08.001 10317.314 - 10369.953: 34.4045% ( 115) 00:09:08.002 10369.953 - 10422.593: 35.9768% ( 160) 00:09:08.002 10422.593 - 10475.232: 37.2838% ( 133) 00:09:08.002 10475.232 - 10527.871: 38.2960% ( 103) 00:09:08.002 10527.871 - 10580.511: 39.4949% ( 122) 00:09:08.002 10580.511 - 10633.150: 40.4481% ( 97) 00:09:08.002 10633.150 - 10685.790: 41.6961% ( 127) 00:09:08.002 10685.790 - 10738.429: 42.3939% ( 71) 00:09:08.002 10738.429 - 10791.068: 43.2488% ( 87) 00:09:08.002 10791.068 - 10843.708: 43.8876% ( 65) 00:09:08.002 10843.708 - 10896.347: 44.6737% ( 80) 00:09:08.002 10896.347 - 10948.986: 45.7940% ( 114) 00:09:08.002 10948.986 - 11001.626: 46.9438% ( 117) 00:09:08.002 11001.626 - 11054.265: 47.9363% ( 101) 00:09:08.002 11054.265 - 11106.904: 48.6635% ( 74) 00:09:08.002 11106.904 - 11159.544: 49.2925% ( 64) 00:09:08.002 11159.544 - 11212.183: 49.9705% ( 69) 00:09:08.002 11212.183 - 11264.822: 50.4619% ( 50) 00:09:08.002 11264.822 - 11317.462: 51.0122% ( 56) 00:09:08.002 11317.462 - 11370.101: 51.4053% ( 40) 00:09:08.002 11370.101 - 11422.741: 51.8475% ( 45) 00:09:08.002 11422.741 - 11475.380: 52.2700% ( 43) 00:09:08.002 11475.380 - 11528.019: 52.7221% ( 46) 00:09:08.002 11528.019 - 11580.659: 53.1250% ( 41) 00:09:08.002 11580.659 - 11633.298: 53.6065% ( 49) 00:09:08.002 11633.298 - 11685.937: 53.8817% ( 28) 00:09:08.002 11685.937 - 11738.577: 54.2256% ( 35) 00:09:08.002 11738.577 - 11791.216: 54.5597% ( 34) 00:09:08.002 11791.216 - 11843.855: 54.9528% ( 40) 00:09:08.002 11843.855 - 11896.495: 55.6112% ( 67) 00:09:08.002 11896.495 - 11949.134: 56.0928% ( 49) 00:09:08.002 11949.134 - 12001.773: 56.5743% ( 49) 00:09:08.002 12001.773 - 12054.413: 57.0755% ( 51) 00:09:08.002 12054.413 - 12107.052: 57.6356% ( 57) 00:09:08.002 12107.052 - 12159.692: 58.1270% ( 50) 00:09:08.002 12159.692 - 12212.331: 58.6281% ( 51) 00:09:08.002 12212.331 - 12264.970: 59.0704% ( 45) 00:09:08.002 12264.970 - 12317.610: 59.4438% ( 38) 00:09:08.002 12317.610 - 12370.249: 59.8467% ( 41) 00:09:08.002 12370.249 - 12422.888: 60.4756% ( 64) 00:09:08.002 12422.888 - 12475.528: 60.9178% ( 45) 00:09:08.002 12475.528 - 12528.167: 61.3895% ( 48) 00:09:08.002 12528.167 - 12580.806: 61.8023% ( 42) 00:09:08.002 12580.806 - 12633.446: 62.2936% ( 50) 00:09:08.002 12633.446 - 12686.085: 62.7653% ( 48) 00:09:08.002 12686.085 - 12738.724: 63.2469% ( 49) 00:09:08.002 12738.724 - 12791.364: 63.6792% ( 44) 00:09:08.002 12791.364 - 12844.003: 63.9544% ( 28) 00:09:08.002 12844.003 - 12896.643: 64.2492% ( 30) 00:09:08.002 12896.643 - 12949.282: 64.5047% ( 26) 00:09:08.002 12949.282 - 13001.921: 64.8683% ( 37) 00:09:08.002 13001.921 - 13054.561: 65.0747% ( 21) 00:09:08.002 13054.561 - 13107.200: 65.3695% ( 30) 00:09:08.002 13107.200 - 13159.839: 65.6840% ( 32) 00:09:08.002 13159.839 - 13212.479: 66.0869% ( 41) 00:09:08.002 13212.479 - 13265.118: 66.5586% ( 48) 00:09:08.002 13265.118 - 13317.757: 67.0008% ( 45) 00:09:08.002 13317.757 - 13370.397: 67.4430% ( 45) 00:09:08.002 13370.397 - 
13423.036: 67.7968% ( 36) 00:09:08.002 13423.036 - 13475.676: 68.1702% ( 38) 00:09:08.002 13475.676 - 13580.954: 69.0448% ( 89) 00:09:08.002 13580.954 - 13686.233: 70.0275% ( 100) 00:09:08.002 13686.233 - 13791.512: 71.0888% ( 108) 00:09:08.002 13791.512 - 13896.790: 71.9438% ( 87) 00:09:08.002 13896.790 - 14002.069: 72.5531% ( 62) 00:09:08.002 14002.069 - 14107.348: 73.3196% ( 78) 00:09:08.002 14107.348 - 14212.627: 74.0763% ( 77) 00:09:08.002 14212.627 - 14317.905: 75.0197% ( 96) 00:09:08.002 14317.905 - 14423.184: 75.8255% ( 82) 00:09:08.002 14423.184 - 14528.463: 76.4446% ( 63) 00:09:08.002 14528.463 - 14633.741: 76.9359% ( 50) 00:09:08.002 14633.741 - 14739.020: 77.8204% ( 90) 00:09:08.002 14739.020 - 14844.299: 78.3215% ( 51) 00:09:08.002 14844.299 - 14949.578: 78.6360% ( 32) 00:09:08.002 14949.578 - 15054.856: 78.9112% ( 28) 00:09:08.002 15054.856 - 15160.135: 79.2060% ( 30) 00:09:08.002 15160.135 - 15265.414: 79.5499% ( 35) 00:09:08.002 15265.414 - 15370.692: 79.8546% ( 31) 00:09:08.002 15370.692 - 15475.971: 80.1494% ( 30) 00:09:08.002 15475.971 - 15581.250: 80.4245% ( 28) 00:09:08.002 15581.250 - 15686.529: 80.8766% ( 46) 00:09:08.002 15686.529 - 15791.807: 81.5939% ( 73) 00:09:08.002 15791.807 - 15897.086: 82.3310% ( 75) 00:09:08.002 15897.086 - 16002.365: 82.9304% ( 61) 00:09:08.002 16002.365 - 16107.643: 83.3825% ( 46) 00:09:08.002 16107.643 - 16212.922: 83.8345% ( 46) 00:09:08.002 16212.922 - 16318.201: 84.3062% ( 48) 00:09:08.002 16318.201 - 16423.480: 84.8369% ( 54) 00:09:08.002 16423.480 - 16528.758: 85.3872% ( 56) 00:09:08.002 16528.758 - 16634.037: 86.1537% ( 78) 00:09:08.002 16634.037 - 16739.316: 87.0480% ( 91) 00:09:08.002 16739.316 - 16844.594: 88.2370% ( 121) 00:09:08.002 16844.594 - 16949.873: 89.6128% ( 140) 00:09:08.002 16949.873 - 17055.152: 90.6545% ( 106) 00:09:08.002 17055.152 - 17160.431: 91.4112% ( 77) 00:09:08.002 17160.431 - 17265.709: 92.1678% ( 77) 00:09:08.002 17265.709 - 17370.988: 92.8164% ( 66) 00:09:08.002 17370.988 - 17476.267: 93.3471% ( 54) 00:09:08.002 17476.267 - 17581.545: 93.9269% ( 59) 00:09:08.002 17581.545 - 17686.824: 94.6246% ( 71) 00:09:08.002 17686.824 - 17792.103: 95.2142% ( 60) 00:09:08.002 17792.103 - 17897.382: 95.6368% ( 43) 00:09:08.002 17897.382 - 18002.660: 95.9119% ( 28) 00:09:08.002 18002.660 - 18107.939: 96.2068% ( 30) 00:09:08.002 18107.939 - 18213.218: 96.4917% ( 29) 00:09:08.002 18213.218 - 18318.496: 96.6981% ( 21) 00:09:08.002 18318.496 - 18423.775: 96.8259% ( 13) 00:09:08.002 18423.775 - 18529.054: 96.9831% ( 16) 00:09:08.002 18529.054 - 18634.333: 97.1502% ( 17) 00:09:08.002 18634.333 - 18739.611: 97.2779% ( 13) 00:09:08.002 18739.611 - 18844.890: 97.6022% ( 33) 00:09:08.002 18844.890 - 18950.169: 97.9265% ( 33) 00:09:08.002 18950.169 - 19055.447: 98.1329% ( 21) 00:09:08.002 19055.447 - 19160.726: 98.2213% ( 9) 00:09:08.002 19160.726 - 19266.005: 98.3196% ( 10) 00:09:08.002 19266.005 - 19371.284: 98.4178% ( 10) 00:09:08.002 19371.284 - 19476.562: 98.5063% ( 9) 00:09:08.002 19476.562 - 19581.841: 98.5849% ( 8) 00:09:08.002 19581.841 - 19687.120: 98.6340% ( 5) 00:09:08.002 19687.120 - 19792.398: 98.6930% ( 6) 00:09:08.002 19792.398 - 19897.677: 98.7421% ( 5) 00:09:08.002 27161.908 - 27372.466: 98.8208% ( 8) 00:09:08.002 27372.466 - 27583.023: 98.8895% ( 7) 00:09:08.002 27583.023 - 27793.581: 98.9583% ( 7) 00:09:08.002 27793.581 - 28004.138: 99.0271% ( 7) 00:09:08.002 28004.138 - 28214.696: 99.0959% ( 7) 00:09:08.002 28214.696 - 28425.253: 99.1549% ( 6) 00:09:08.002 28425.253 - 28635.810: 99.2335% ( 8) 00:09:08.002 
28635.810 - 28846.368: 99.3121% ( 8) 00:09:08.002 28846.368 - 29056.925: 99.3711% ( 6) 00:09:08.002 36005.320 - 36215.878: 99.3907% ( 2) 00:09:08.002 36215.878 - 36426.435: 99.4497% ( 6) 00:09:08.002 36426.435 - 36636.993: 99.5283% ( 8) 00:09:08.002 36636.993 - 36847.550: 99.5971% ( 7) 00:09:08.002 36847.550 - 37058.108: 99.6462% ( 5) 00:09:08.002 37058.108 - 37268.665: 99.7248% ( 8) 00:09:08.002 37268.665 - 37479.222: 99.7936% ( 7) 00:09:08.002 37479.222 - 37689.780: 99.8624% ( 7) 00:09:08.002 37689.780 - 37900.337: 99.9312% ( 7) 00:09:08.002 37900.337 - 38110.895: 100.0000% ( 7) 00:09:08.002 00:09:08.002 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:08.002 ============================================================================== 00:09:08.002 Range in us Cumulative IO count 00:09:08.002 6790.477 - 6843.116: 0.0098% ( 1) 00:09:08.002 6895.756 - 6948.395: 0.0195% ( 1) 00:09:08.002 7053.674 - 7106.313: 0.0391% ( 2) 00:09:08.002 7106.313 - 7158.953: 0.0977% ( 6) 00:09:08.002 7158.953 - 7211.592: 0.1465% ( 5) 00:09:08.002 7211.592 - 7264.231: 0.2539% ( 11) 00:09:08.002 7264.231 - 7316.871: 0.4004% ( 15) 00:09:08.002 7316.871 - 7369.510: 0.4395% ( 4) 00:09:08.002 7369.510 - 7422.149: 0.4492% ( 1) 00:09:08.002 7422.149 - 7474.789: 0.4688% ( 2) 00:09:08.002 7474.789 - 7527.428: 0.4883% ( 2) 00:09:08.002 7527.428 - 7580.067: 0.5176% ( 3) 00:09:08.002 7580.067 - 7632.707: 0.5371% ( 2) 00:09:08.002 7632.707 - 7685.346: 0.5566% ( 2) 00:09:08.002 7685.346 - 7737.986: 0.5762% ( 2) 00:09:08.002 7737.986 - 7790.625: 0.5957% ( 2) 00:09:08.002 7790.625 - 7843.264: 0.6250% ( 3) 00:09:08.002 8790.773 - 8843.412: 0.6348% ( 1) 00:09:08.002 8896.051 - 8948.691: 0.6445% ( 1) 00:09:08.002 8948.691 - 9001.330: 0.6641% ( 2) 00:09:08.002 9053.969 - 9106.609: 0.7324% ( 7) 00:09:08.002 9106.609 - 9159.248: 0.9082% ( 18) 00:09:08.002 9159.248 - 9211.888: 1.1816% ( 28) 00:09:08.002 9211.888 - 9264.527: 1.6113% ( 44) 00:09:08.002 9264.527 - 9317.166: 2.2168% ( 62) 00:09:08.002 9317.166 - 9369.806: 2.8711% ( 67) 00:09:08.002 9369.806 - 9422.445: 3.5645% ( 71) 00:09:08.002 9422.445 - 9475.084: 4.4727% ( 93) 00:09:08.002 9475.084 - 9527.724: 5.7129% ( 127) 00:09:08.002 9527.724 - 9580.363: 7.2070% ( 153) 00:09:08.002 9580.363 - 9633.002: 8.8281% ( 166) 00:09:08.002 9633.002 - 9685.642: 10.8691% ( 209) 00:09:08.002 9685.642 - 9738.281: 12.7539% ( 193) 00:09:08.002 9738.281 - 9790.920: 15.0684% ( 237) 00:09:08.002 9790.920 - 9843.560: 17.7441% ( 274) 00:09:08.002 9843.560 - 9896.199: 20.4688% ( 279) 00:09:08.002 9896.199 - 9948.839: 22.9785% ( 257) 00:09:08.002 9948.839 - 10001.478: 24.9121% ( 198) 00:09:08.002 10001.478 - 10054.117: 26.9336% ( 207) 00:09:08.002 10054.117 - 10106.757: 28.8379% ( 195) 00:09:08.002 10106.757 - 10159.396: 30.6738% ( 188) 00:09:08.002 10159.396 - 10212.035: 32.3730% ( 174) 00:09:08.003 10212.035 - 10264.675: 33.8281% ( 149) 00:09:08.003 10264.675 - 10317.314: 35.0977% ( 130) 00:09:08.003 10317.314 - 10369.953: 36.4648% ( 140) 00:09:08.003 10369.953 - 10422.593: 37.7246% ( 129) 00:09:08.003 10422.593 - 10475.232: 38.8184% ( 112) 00:09:08.003 10475.232 - 10527.871: 39.7070% ( 91) 00:09:08.003 10527.871 - 10580.511: 40.7617% ( 108) 00:09:08.003 10580.511 - 10633.150: 41.8066% ( 107) 00:09:08.003 10633.150 - 10685.790: 42.5977% ( 81) 00:09:08.003 10685.790 - 10738.429: 43.4766% ( 90) 00:09:08.003 10738.429 - 10791.068: 44.5020% ( 105) 00:09:08.003 10791.068 - 10843.708: 45.5078% ( 103) 00:09:08.003 10843.708 - 10896.347: 46.2012% ( 71) 00:09:08.003 10896.347 - 10948.986: 
47.0703% ( 89) 00:09:08.003 10948.986 - 11001.626: 47.8711% ( 82) 00:09:08.003 11001.626 - 11054.265: 48.7695% ( 92) 00:09:08.003 11054.265 - 11106.904: 49.4531% ( 70) 00:09:08.003 11106.904 - 11159.544: 49.8730% ( 43) 00:09:08.003 11159.544 - 11212.183: 50.4590% ( 60) 00:09:08.003 11212.183 - 11264.822: 50.9766% ( 53) 00:09:08.003 11264.822 - 11317.462: 51.6602% ( 70) 00:09:08.003 11317.462 - 11370.101: 51.9824% ( 33) 00:09:08.003 11370.101 - 11422.741: 52.3828% ( 41) 00:09:08.003 11422.741 - 11475.380: 52.6367% ( 26) 00:09:08.003 11475.380 - 11528.019: 53.0957% ( 47) 00:09:08.003 11528.019 - 11580.659: 53.6230% ( 54) 00:09:08.003 11580.659 - 11633.298: 53.8574% ( 24) 00:09:08.003 11633.298 - 11685.937: 54.2090% ( 36) 00:09:08.003 11685.937 - 11738.577: 54.4531% ( 25) 00:09:08.003 11738.577 - 11791.216: 54.6387% ( 19) 00:09:08.003 11791.216 - 11843.855: 54.9023% ( 27) 00:09:08.003 11843.855 - 11896.495: 55.3223% ( 43) 00:09:08.003 11896.495 - 11949.134: 55.6055% ( 29) 00:09:08.003 11949.134 - 12001.773: 55.8789% ( 28) 00:09:08.003 12001.773 - 12054.413: 56.1914% ( 32) 00:09:08.003 12054.413 - 12107.052: 56.3281% ( 14) 00:09:08.003 12107.052 - 12159.692: 56.4844% ( 16) 00:09:08.003 12159.692 - 12212.331: 56.6895% ( 21) 00:09:08.003 12212.331 - 12264.970: 56.9727% ( 29) 00:09:08.003 12264.970 - 12317.610: 57.6074% ( 65) 00:09:08.003 12317.610 - 12370.249: 58.2324% ( 64) 00:09:08.003 12370.249 - 12422.888: 58.7207% ( 50) 00:09:08.003 12422.888 - 12475.528: 59.3945% ( 69) 00:09:08.003 12475.528 - 12528.167: 59.9609% ( 58) 00:09:08.003 12528.167 - 12580.806: 60.4297% ( 48) 00:09:08.003 12580.806 - 12633.446: 60.7715% ( 35) 00:09:08.003 12633.446 - 12686.085: 61.3086% ( 55) 00:09:08.003 12686.085 - 12738.724: 61.9336% ( 64) 00:09:08.003 12738.724 - 12791.364: 62.6660% ( 75) 00:09:08.003 12791.364 - 12844.003: 63.4082% ( 76) 00:09:08.003 12844.003 - 12896.643: 64.1016% ( 71) 00:09:08.003 12896.643 - 12949.282: 64.6094% ( 52) 00:09:08.003 12949.282 - 13001.921: 65.0684% ( 47) 00:09:08.003 13001.921 - 13054.561: 65.4883% ( 43) 00:09:08.003 13054.561 - 13107.200: 65.7520% ( 27) 00:09:08.003 13107.200 - 13159.839: 66.0059% ( 26) 00:09:08.003 13159.839 - 13212.479: 66.2500% ( 25) 00:09:08.003 13212.479 - 13265.118: 66.5820% ( 34) 00:09:08.003 13265.118 - 13317.757: 66.8750% ( 30) 00:09:08.003 13317.757 - 13370.397: 67.1484% ( 28) 00:09:08.003 13370.397 - 13423.036: 67.5000% ( 36) 00:09:08.003 13423.036 - 13475.676: 67.7441% ( 25) 00:09:08.003 13475.676 - 13580.954: 68.1934% ( 46) 00:09:08.003 13580.954 - 13686.233: 68.8184% ( 64) 00:09:08.003 13686.233 - 13791.512: 69.5898% ( 79) 00:09:08.003 13791.512 - 13896.790: 70.4004% ( 83) 00:09:08.003 13896.790 - 14002.069: 71.5820% ( 121) 00:09:08.003 14002.069 - 14107.348: 72.4609% ( 90) 00:09:08.003 14107.348 - 14212.627: 73.3789% ( 94) 00:09:08.003 14212.627 - 14317.905: 74.2383% ( 88) 00:09:08.003 14317.905 - 14423.184: 74.9805% ( 76) 00:09:08.003 14423.184 - 14528.463: 75.8398% ( 88) 00:09:08.003 14528.463 - 14633.741: 76.7285% ( 91) 00:09:08.003 14633.741 - 14739.020: 77.2559% ( 54) 00:09:08.003 14739.020 - 14844.299: 78.0078% ( 77) 00:09:08.003 14844.299 - 14949.578: 78.5352% ( 54) 00:09:08.003 14949.578 - 15054.856: 79.1309% ( 61) 00:09:08.003 15054.856 - 15160.135: 79.7070% ( 59) 00:09:08.003 15160.135 - 15265.414: 80.0879% ( 39) 00:09:08.003 15265.414 - 15370.692: 80.4199% ( 34) 00:09:08.003 15370.692 - 15475.971: 80.7324% ( 32) 00:09:08.003 15475.971 - 15581.250: 81.0645% ( 34) 00:09:08.003 15581.250 - 15686.529: 81.4062% ( 35) 00:09:08.003 
15686.529 - 15791.807: 81.5723% ( 17) 00:09:08.003 15791.807 - 15897.086: 81.8359% ( 27) 00:09:08.003 15897.086 - 16002.365: 82.2363% ( 41) 00:09:08.003 16002.365 - 16107.643: 82.6953% ( 47) 00:09:08.003 16107.643 - 16212.922: 83.3887% ( 71) 00:09:08.003 16212.922 - 16318.201: 84.1016% ( 73) 00:09:08.003 16318.201 - 16423.480: 84.7363% ( 65) 00:09:08.003 16423.480 - 16528.758: 85.4004% ( 68) 00:09:08.003 16528.758 - 16634.037: 86.0645% ( 68) 00:09:08.003 16634.037 - 16739.316: 87.0020% ( 96) 00:09:08.003 16739.316 - 16844.594: 88.1445% ( 117) 00:09:08.003 16844.594 - 16949.873: 88.8965% ( 77) 00:09:08.003 16949.873 - 17055.152: 89.4629% ( 58) 00:09:08.003 17055.152 - 17160.431: 90.0488% ( 60) 00:09:08.003 17160.431 - 17265.709: 90.7520% ( 72) 00:09:08.003 17265.709 - 17370.988: 91.4062% ( 67) 00:09:08.003 17370.988 - 17476.267: 92.2168% ( 83) 00:09:08.003 17476.267 - 17581.545: 93.1738% ( 98) 00:09:08.003 17581.545 - 17686.824: 93.8672% ( 71) 00:09:08.003 17686.824 - 17792.103: 94.4727% ( 62) 00:09:08.003 17792.103 - 17897.382: 94.9414% ( 48) 00:09:08.003 17897.382 - 18002.660: 95.4004% ( 47) 00:09:08.003 18002.660 - 18107.939: 95.7812% ( 39) 00:09:08.003 18107.939 - 18213.218: 96.0645% ( 29) 00:09:08.003 18213.218 - 18318.496: 96.3477% ( 29) 00:09:08.003 18318.496 - 18423.775: 96.7773% ( 44) 00:09:08.003 18423.775 - 18529.054: 97.2754% ( 51) 00:09:08.003 18529.054 - 18634.333: 97.4512% ( 18) 00:09:08.003 18634.333 - 18739.611: 97.6172% ( 17) 00:09:08.003 18739.611 - 18844.890: 97.7344% ( 12) 00:09:08.003 18844.890 - 18950.169: 97.8320% ( 10) 00:09:08.003 18950.169 - 19055.447: 97.9102% ( 8) 00:09:08.003 19055.447 - 19160.726: 97.9590% ( 5) 00:09:08.003 19160.726 - 19266.005: 98.0469% ( 9) 00:09:08.003 19266.005 - 19371.284: 98.1836% ( 14) 00:09:08.003 19371.284 - 19476.562: 98.3496% ( 17) 00:09:08.003 19476.562 - 19581.841: 98.6133% ( 27) 00:09:08.003 19581.841 - 19687.120: 98.7988% ( 19) 00:09:08.003 19687.120 - 19792.398: 98.9355% ( 14) 00:09:08.003 19792.398 - 19897.677: 99.0137% ( 8) 00:09:08.003 19897.677 - 20002.956: 99.0723% ( 6) 00:09:08.003 20002.956 - 20108.235: 99.1406% ( 7) 00:09:08.003 20108.235 - 20213.513: 99.2090% ( 7) 00:09:08.003 20213.513 - 20318.792: 99.2773% ( 7) 00:09:08.003 20318.792 - 20424.071: 99.3457% ( 7) 00:09:08.003 20424.071 - 20529.349: 99.3750% ( 3) 00:09:08.003 27793.581 - 28004.138: 99.4043% ( 3) 00:09:08.003 28004.138 - 28214.696: 99.4727% ( 7) 00:09:08.003 28214.696 - 28425.253: 99.5508% ( 8) 00:09:08.003 28425.253 - 28635.810: 99.6191% ( 7) 00:09:08.003 28635.810 - 28846.368: 99.6875% ( 7) 00:09:08.003 28846.368 - 29056.925: 99.7656% ( 8) 00:09:08.003 29056.925 - 29267.483: 99.8438% ( 8) 00:09:08.003 29267.483 - 29478.040: 99.9219% ( 8) 00:09:08.003 29478.040 - 29688.598: 100.0000% ( 8) 00:09:08.003 00:09:08.003 ************************************ 00:09:08.003 END TEST nvme_perf 00:09:08.003 ************************************ 00:09:08.003 00:13:22 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:08.003 00:09:08.003 real 0m2.559s 00:09:08.003 user 0m2.194s 00:09:08.003 sys 0m0.248s 00:09:08.003 00:13:22 nvme.nvme_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:08.003 00:13:22 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:09:08.003 00:13:22 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:08.003 00:13:22 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:08.003 00:13:22 nvme -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:09:08.003 00:13:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:08.003 ************************************ 00:09:08.003 START TEST nvme_hello_world 00:09:08.003 ************************************ 00:09:08.003 00:13:22 nvme.nvme_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:08.003 Initializing NVMe Controllers 00:09:08.003 Attached to 0000:00:10.0 00:09:08.003 Namespace ID: 1 size: 6GB 00:09:08.004 Attached to 0000:00:11.0 00:09:08.004 Namespace ID: 1 size: 5GB 00:09:08.004 Attached to 0000:00:13.0 00:09:08.004 Namespace ID: 1 size: 1GB 00:09:08.004 Attached to 0000:00:12.0 00:09:08.004 Namespace ID: 1 size: 4GB 00:09:08.004 Namespace ID: 2 size: 4GB 00:09:08.004 Namespace ID: 3 size: 4GB 00:09:08.004 Initialization complete. 00:09:08.004 INFO: using host memory buffer for IO 00:09:08.004 Hello world! 00:09:08.004 INFO: using host memory buffer for IO 00:09:08.004 Hello world! 00:09:08.004 INFO: using host memory buffer for IO 00:09:08.004 Hello world! 00:09:08.004 INFO: using host memory buffer for IO 00:09:08.004 Hello world! 00:09:08.004 INFO: using host memory buffer for IO 00:09:08.004 Hello world! 00:09:08.004 INFO: using host memory buffer for IO 00:09:08.004 Hello world! 00:09:08.004 00:09:08.004 real 0m0.255s 00:09:08.004 user 0m0.092s 00:09:08.004 sys 0m0.115s 00:09:08.004 00:13:22 nvme.nvme_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:08.004 00:13:22 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:08.004 ************************************ 00:09:08.004 END TEST nvme_hello_world 00:09:08.004 ************************************ 00:09:08.263 00:13:22 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:08.263 00:13:22 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:08.263 00:13:22 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:08.263 00:13:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:08.263 ************************************ 00:09:08.263 START TEST nvme_sgl 00:09:08.263 ************************************ 00:09:08.263 00:13:22 nvme.nvme_sgl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:08.263 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:09:08.263 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:09:08.263 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:09:08.263 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:09:08.263 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:09:08.263 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:09:08.263 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:09:08.263 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:09:08.263 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:09:08.523 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:09:08.523 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:09:08.523 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:09:08.523 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:09:08.523 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:09:08.523 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:09:08.523 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:09:08.523 0000:00:13.0: build_io_request_4 
Invalid IO length parameter 00:09:08.523 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:09:08.523 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:09:08.523 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:09:08.523 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:09:08.523 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:09:08.523 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:09:08.523 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:09:08.523 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:09:08.523 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:09:08.523 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:09:08.523 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:09:08.523 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:09:08.523 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:09:08.523 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:09:08.523 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:09:08.523 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:09:08.523 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:09:08.523 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:09:08.523 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:09:08.523 NVMe Readv/Writev Request test 00:09:08.523 Attached to 0000:00:10.0 00:09:08.523 Attached to 0000:00:11.0 00:09:08.523 Attached to 0000:00:13.0 00:09:08.523 Attached to 0000:00:12.0 00:09:08.523 0000:00:10.0: build_io_request_2 test passed 00:09:08.523 0000:00:10.0: build_io_request_4 test passed 00:09:08.523 0000:00:10.0: build_io_request_5 test passed 00:09:08.523 0000:00:10.0: build_io_request_6 test passed 00:09:08.523 0000:00:10.0: build_io_request_7 test passed 00:09:08.523 0000:00:10.0: build_io_request_10 test passed 00:09:08.523 0000:00:11.0: build_io_request_2 test passed 00:09:08.523 0000:00:11.0: build_io_request_4 test passed 00:09:08.523 0000:00:11.0: build_io_request_5 test passed 00:09:08.523 0000:00:11.0: build_io_request_6 test passed 00:09:08.523 0000:00:11.0: build_io_request_7 test passed 00:09:08.523 0000:00:11.0: build_io_request_10 test passed 00:09:08.523 Cleaning up... 
00:09:08.523 ************************************ 00:09:08.523 END TEST nvme_sgl 00:09:08.523 ************************************ 00:09:08.523 00:09:08.523 real 0m0.294s 00:09:08.523 user 0m0.121s 00:09:08.523 sys 0m0.123s 00:09:08.523 00:13:22 nvme.nvme_sgl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:08.523 00:13:22 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:09:08.523 00:13:23 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:08.523 00:13:23 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:08.523 00:13:23 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:08.523 00:13:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:08.523 ************************************ 00:09:08.523 START TEST nvme_e2edp 00:09:08.523 ************************************ 00:09:08.523 00:13:23 nvme.nvme_e2edp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:08.785 NVMe Write/Read with End-to-End data protection test 00:09:08.785 Attached to 0000:00:10.0 00:09:08.785 Attached to 0000:00:11.0 00:09:08.785 Attached to 0000:00:13.0 00:09:08.785 Attached to 0000:00:12.0 00:09:08.785 Cleaning up... 00:09:08.785 00:09:08.785 real 0m0.236s 00:09:08.785 user 0m0.070s 00:09:08.785 sys 0m0.124s 00:09:08.785 00:13:23 nvme.nvme_e2edp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:08.785 ************************************ 00:09:08.785 00:13:23 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:08.785 END TEST nvme_e2edp 00:09:08.785 ************************************ 00:09:08.785 00:13:23 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:08.785 00:13:23 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:08.785 00:13:23 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:08.785 00:13:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:08.786 ************************************ 00:09:08.786 START TEST nvme_reserve 00:09:08.786 ************************************ 00:09:08.786 00:13:23 nvme.nvme_reserve -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:09.048 ===================================================== 00:09:09.048 NVMe Controller at PCI bus 0, device 16, function 0 00:09:09.048 ===================================================== 00:09:09.048 Reservations: Not Supported 00:09:09.048 ===================================================== 00:09:09.048 NVMe Controller at PCI bus 0, device 17, function 0 00:09:09.048 ===================================================== 00:09:09.048 Reservations: Not Supported 00:09:09.048 ===================================================== 00:09:09.048 NVMe Controller at PCI bus 0, device 19, function 0 00:09:09.048 ===================================================== 00:09:09.048 Reservations: Not Supported 00:09:09.048 ===================================================== 00:09:09.048 NVMe Controller at PCI bus 0, device 18, function 0 00:09:09.048 ===================================================== 00:09:09.048 Reservations: Not Supported 00:09:09.048 Reservation test passed 00:09:09.048 ************************************ 00:09:09.048 END TEST nvme_reserve 00:09:09.048 ************************************ 00:09:09.048 00:09:09.048 real 0m0.249s 00:09:09.048 user 0m0.086s 00:09:09.048 sys 0m0.118s 00:09:09.048 00:13:23 
nvme.nvme_reserve -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:09.048 00:13:23 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:09.048 00:13:23 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:09.048 00:13:23 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:09.048 00:13:23 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:09.048 00:13:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:09.048 ************************************ 00:09:09.048 START TEST nvme_err_injection 00:09:09.048 ************************************ 00:09:09.048 00:13:23 nvme.nvme_err_injection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:09.307 NVMe Error Injection test 00:09:09.307 Attached to 0000:00:10.0 00:09:09.307 Attached to 0000:00:11.0 00:09:09.307 Attached to 0000:00:13.0 00:09:09.307 Attached to 0000:00:12.0 00:09:09.307 0000:00:10.0: get features failed as expected 00:09:09.307 0000:00:11.0: get features failed as expected 00:09:09.307 0000:00:13.0: get features failed as expected 00:09:09.307 0000:00:12.0: get features failed as expected 00:09:09.307 0000:00:10.0: get features successfully as expected 00:09:09.307 0000:00:11.0: get features successfully as expected 00:09:09.307 0000:00:13.0: get features successfully as expected 00:09:09.307 0000:00:12.0: get features successfully as expected 00:09:09.307 0000:00:11.0: read failed as expected 00:09:09.307 0000:00:10.0: read failed as expected 00:09:09.307 0000:00:13.0: read failed as expected 00:09:09.307 0000:00:12.0: read failed as expected 00:09:09.307 0000:00:11.0: read successfully as expected 00:09:09.307 0000:00:13.0: read successfully as expected 00:09:09.307 0000:00:10.0: read successfully as expected 00:09:09.307 0000:00:12.0: read successfully as expected 00:09:09.307 Cleaning up... 00:09:09.307 00:09:09.307 real 0m0.250s 00:09:09.307 user 0m0.085s 00:09:09.307 sys 0m0.123s 00:09:09.307 ************************************ 00:09:09.307 END TEST nvme_err_injection 00:09:09.307 ************************************ 00:09:09.307 00:13:23 nvme.nvme_err_injection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:09.307 00:13:23 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:09.566 00:13:23 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:09.566 00:13:23 nvme -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:09:09.566 00:13:23 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:09.566 00:13:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:09.566 ************************************ 00:09:09.566 START TEST nvme_overhead 00:09:09.566 ************************************ 00:09:09.566 00:13:24 nvme.nvme_overhead -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:10.947 Initializing NVMe Controllers 00:09:10.947 Attached to 0000:00:10.0 00:09:10.947 Attached to 0000:00:11.0 00:09:10.947 Attached to 0000:00:13.0 00:09:10.947 Attached to 0000:00:12.0 00:09:10.947 Initialization complete. Launching workers. 
00:09:10.947 submit (in ns) avg, min, max = 13563.1, 12232.1, 121904.4 00:09:10.947 complete (in ns) avg, min, max = 9111.4, 8359.8, 129698.0 00:09:10.947 00:09:10.947 Submit histogram 00:09:10.947 ================ 00:09:10.947 Range in us Cumulative Count 00:09:10.947 12.183 - 12.235: 0.0169% ( 1) 00:09:10.947 12.286 - 12.337: 0.0339% ( 1) 00:09:10.947 12.337 - 12.389: 0.1355% ( 6) 00:09:10.947 12.389 - 12.440: 0.3556% ( 13) 00:09:10.947 12.440 - 12.492: 0.6603% ( 18) 00:09:10.947 12.492 - 12.543: 1.3038% ( 38) 00:09:10.947 12.543 - 12.594: 1.9810% ( 40) 00:09:10.947 12.594 - 12.646: 3.3187% ( 79) 00:09:10.947 12.646 - 12.697: 5.2320% ( 113) 00:09:10.947 12.697 - 12.749: 8.3644% ( 185) 00:09:10.947 12.749 - 12.800: 13.0207% ( 275) 00:09:10.947 12.800 - 12.851: 17.8801% ( 287) 00:09:10.947 12.851 - 12.903: 24.3651% ( 383) 00:09:10.947 12.903 - 12.954: 32.5601% ( 484) 00:09:10.947 12.954 - 13.006: 40.0271% ( 441) 00:09:10.947 13.006 - 13.057: 46.9184% ( 407) 00:09:10.947 13.057 - 13.108: 53.6234% ( 396) 00:09:10.947 13.108 - 13.160: 59.5327% ( 349) 00:09:10.947 13.160 - 13.263: 70.3691% ( 640) 00:09:10.947 13.263 - 13.365: 79.5632% ( 543) 00:09:10.947 13.365 - 13.468: 85.9296% ( 376) 00:09:10.947 13.468 - 13.571: 89.5699% ( 215) 00:09:10.947 13.571 - 13.674: 91.7880% ( 131) 00:09:10.947 13.674 - 13.777: 92.7531% ( 57) 00:09:10.947 13.777 - 13.880: 93.3965% ( 38) 00:09:10.947 13.880 - 13.982: 93.7521% ( 21) 00:09:10.947 13.982 - 14.085: 93.8876% ( 8) 00:09:10.947 14.085 - 14.188: 93.9553% ( 4) 00:09:10.947 14.188 - 14.291: 93.9892% ( 2) 00:09:10.947 14.291 - 14.394: 94.0230% ( 2) 00:09:10.947 14.394 - 14.496: 94.0400% ( 1) 00:09:10.947 14.496 - 14.599: 94.0569% ( 1) 00:09:10.947 14.805 - 14.908: 94.0908% ( 2) 00:09:10.947 14.908 - 15.010: 94.1077% ( 1) 00:09:10.947 15.422 - 15.524: 94.1246% ( 1) 00:09:10.947 15.833 - 15.936: 94.1416% ( 1) 00:09:10.947 16.039 - 16.141: 94.1585% ( 1) 00:09:10.947 16.450 - 16.553: 94.1754% ( 1) 00:09:10.947 16.758 - 16.861: 94.1923% ( 1) 00:09:10.947 16.861 - 16.964: 94.2093% ( 1) 00:09:10.947 17.067 - 17.169: 94.2431% ( 2) 00:09:10.947 17.169 - 17.272: 94.2770% ( 2) 00:09:10.947 17.272 - 17.375: 94.3109% ( 2) 00:09:10.947 17.478 - 17.581: 94.3447% ( 2) 00:09:10.947 17.581 - 17.684: 94.3617% ( 1) 00:09:10.947 17.684 - 17.786: 94.4802% ( 7) 00:09:10.947 17.786 - 17.889: 94.5310% ( 3) 00:09:10.947 17.889 - 17.992: 94.5987% ( 4) 00:09:10.947 17.992 - 18.095: 94.7003% ( 6) 00:09:10.947 18.095 - 18.198: 94.9204% ( 13) 00:09:10.947 18.198 - 18.300: 95.0897% ( 10) 00:09:10.947 18.300 - 18.403: 95.3776% ( 17) 00:09:10.947 18.403 - 18.506: 95.6824% ( 18) 00:09:10.947 18.506 - 18.609: 95.9025% ( 13) 00:09:10.947 18.609 - 18.712: 96.0887% ( 11) 00:09:10.947 18.712 - 18.814: 96.3258% ( 14) 00:09:10.947 18.814 - 18.917: 96.6136% ( 17) 00:09:10.947 18.917 - 19.020: 96.8507% ( 14) 00:09:10.947 19.020 - 19.123: 97.0538% ( 12) 00:09:10.947 19.123 - 19.226: 97.3248% ( 16) 00:09:10.947 19.226 - 19.329: 97.5279% ( 12) 00:09:10.947 19.329 - 19.431: 97.6465% ( 7) 00:09:10.947 19.431 - 19.534: 97.8327% ( 11) 00:09:10.947 19.534 - 19.637: 97.9851% ( 9) 00:09:10.947 19.637 - 19.740: 98.0867% ( 6) 00:09:10.947 19.740 - 19.843: 98.1714% ( 5) 00:09:10.947 19.843 - 19.945: 98.2729% ( 6) 00:09:10.947 19.945 - 20.048: 98.3068% ( 2) 00:09:10.948 20.048 - 20.151: 98.3407% ( 2) 00:09:10.948 20.151 - 20.254: 98.3576% ( 1) 00:09:10.948 20.254 - 20.357: 98.3745% ( 1) 00:09:10.948 20.357 - 20.459: 98.3915% ( 1) 00:09:10.948 20.459 - 20.562: 98.4084% ( 1) 00:09:10.948 20.562 - 20.665: 98.4253% ( 1) 
00:09:10.948 20.665 - 20.768: 98.4592% ( 2) 00:09:10.948 20.768 - 20.871: 98.4761% ( 1) 00:09:10.948 20.871 - 20.973: 98.4931% ( 1) 00:09:10.948 20.973 - 21.076: 98.5100% ( 1) 00:09:10.948 21.076 - 21.179: 98.5269% ( 1) 00:09:10.948 21.179 - 21.282: 98.5777% ( 3) 00:09:10.948 21.282 - 21.385: 98.6454% ( 4) 00:09:10.948 21.385 - 21.488: 98.6793% ( 2) 00:09:10.948 21.488 - 21.590: 98.6962% ( 1) 00:09:10.948 21.590 - 21.693: 98.7132% ( 1) 00:09:10.948 21.693 - 21.796: 98.7470% ( 2) 00:09:10.948 21.796 - 21.899: 98.7809% ( 2) 00:09:10.948 21.899 - 22.002: 98.7978% ( 1) 00:09:10.948 22.002 - 22.104: 98.8317% ( 2) 00:09:10.948 22.207 - 22.310: 98.8656% ( 2) 00:09:10.948 22.310 - 22.413: 98.8825% ( 1) 00:09:10.948 22.516 - 22.618: 98.9164% ( 2) 00:09:10.948 22.618 - 22.721: 98.9502% ( 2) 00:09:10.948 22.721 - 22.824: 98.9672% ( 1) 00:09:10.948 22.927 - 23.030: 98.9841% ( 1) 00:09:10.948 23.030 - 23.133: 99.0010% ( 1) 00:09:10.948 23.133 - 23.235: 99.0518% ( 3) 00:09:10.948 23.338 - 23.441: 99.1195% ( 4) 00:09:10.948 23.441 - 23.544: 99.1534% ( 2) 00:09:10.948 23.544 - 23.647: 99.1873% ( 2) 00:09:10.948 23.749 - 23.852: 99.2211% ( 2) 00:09:10.948 23.852 - 23.955: 99.2719% ( 3) 00:09:10.948 23.955 - 24.058: 99.2889% ( 1) 00:09:10.948 24.058 - 24.161: 99.3058% ( 1) 00:09:10.948 24.675 - 24.778: 99.3227% ( 1) 00:09:10.948 24.778 - 24.880: 99.3397% ( 1) 00:09:10.948 24.880 - 24.983: 99.3735% ( 2) 00:09:10.948 24.983 - 25.086: 99.3905% ( 1) 00:09:10.948 25.189 - 25.292: 99.4582% ( 4) 00:09:10.948 26.217 - 26.320: 99.4751% ( 1) 00:09:10.948 26.320 - 26.525: 99.4920% ( 1) 00:09:10.948 26.731 - 26.937: 99.5090% ( 1) 00:09:10.948 27.348 - 27.553: 99.5259% ( 1) 00:09:10.948 28.376 - 28.582: 99.5598% ( 2) 00:09:10.948 28.582 - 28.787: 99.5767% ( 1) 00:09:10.948 28.787 - 28.993: 99.6275% ( 3) 00:09:10.948 28.993 - 29.198: 99.6614% ( 2) 00:09:10.948 29.404 - 29.610: 99.6783% ( 1) 00:09:10.948 29.610 - 29.815: 99.6952% ( 1) 00:09:10.948 29.815 - 30.021: 99.7122% ( 1) 00:09:10.948 30.021 - 30.227: 99.7460% ( 2) 00:09:10.948 32.283 - 32.488: 99.7630% ( 1) 00:09:10.948 38.040 - 38.246: 99.7799% ( 1) 00:09:10.948 38.657 - 38.863: 99.7968% ( 1) 00:09:10.948 38.863 - 39.068: 99.8137% ( 1) 00:09:10.948 39.274 - 39.480: 99.8476% ( 2) 00:09:10.948 40.096 - 40.302: 99.8645% ( 1) 00:09:10.948 41.741 - 41.947: 99.8815% ( 1) 00:09:10.948 42.975 - 43.181: 99.8984% ( 1) 00:09:10.948 44.003 - 44.209: 99.9153% ( 1) 00:09:10.948 44.620 - 44.826: 99.9323% ( 1) 00:09:10.948 45.443 - 45.648: 99.9492% ( 1) 00:09:10.948 49.761 - 49.966: 99.9661% ( 1) 00:09:10.948 52.639 - 53.051: 99.9831% ( 1) 00:09:10.948 121.729 - 122.551: 100.0000% ( 1) 00:09:10.948 00:09:10.948 Complete histogram 00:09:10.948 ================== 00:09:10.948 Range in us Cumulative Count 00:09:10.948 8.328 - 8.379: 0.0339% ( 2) 00:09:10.948 8.379 - 8.431: 0.4402% ( 24) 00:09:10.948 8.431 - 8.482: 0.7958% ( 21) 00:09:10.948 8.482 - 8.533: 1.8794% ( 64) 00:09:10.948 8.533 - 8.585: 4.2330% ( 139) 00:09:10.948 8.585 - 8.636: 7.8395% ( 213) 00:09:10.948 8.636 - 8.688: 11.6830% ( 227) 00:09:10.948 8.688 - 8.739: 19.5225% ( 463) 00:09:10.948 8.739 - 8.790: 33.9655% ( 853) 00:09:10.948 8.790 - 8.842: 46.8676% ( 762) 00:09:10.948 8.842 - 8.893: 56.9929% ( 598) 00:09:10.948 8.893 - 8.945: 65.9329% ( 528) 00:09:10.948 8.945 - 8.996: 73.9248% ( 472) 00:09:10.948 8.996 - 9.047: 79.3938% ( 323) 00:09:10.948 9.047 - 9.099: 83.9316% ( 268) 00:09:10.948 9.099 - 9.150: 87.2503% ( 196) 00:09:10.948 9.150 - 9.202: 89.8578% ( 154) 00:09:10.948 9.202 - 9.253: 91.7880% ( 114) 
00:09:10.948 9.253 - 9.304: 93.1087% ( 78) 00:09:10.948 9.304 - 9.356: 93.9892% ( 52) 00:09:10.948 9.356 - 9.407: 94.8358% ( 50) 00:09:10.948 9.407 - 9.459: 95.3607% ( 31) 00:09:10.948 9.459 - 9.510: 95.6654% ( 18) 00:09:10.948 9.510 - 9.561: 96.0718% ( 24) 00:09:10.948 9.561 - 9.613: 96.3088% ( 14) 00:09:10.948 9.613 - 9.664: 96.5967% ( 17) 00:09:10.948 9.664 - 9.716: 96.8337% ( 14) 00:09:10.948 9.716 - 9.767: 96.9692% ( 8) 00:09:10.948 9.767 - 9.818: 97.0708% ( 6) 00:09:10.948 9.818 - 9.870: 97.1724% ( 6) 00:09:10.948 9.870 - 9.921: 97.2401% ( 4) 00:09:10.948 9.921 - 9.973: 97.3248% ( 5) 00:09:10.948 9.973 - 10.024: 97.4094% ( 5) 00:09:10.948 10.024 - 10.076: 97.4263% ( 1) 00:09:10.948 10.076 - 10.127: 97.4941% ( 4) 00:09:10.948 10.127 - 10.178: 97.5110% ( 1) 00:09:10.948 10.230 - 10.281: 97.5787% ( 4) 00:09:10.948 10.281 - 10.333: 97.5957% ( 1) 00:09:10.948 10.435 - 10.487: 97.6126% ( 1) 00:09:10.948 10.538 - 10.590: 97.6295% ( 1) 00:09:10.948 10.641 - 10.692: 97.6465% ( 1) 00:09:10.948 10.847 - 10.898: 97.6634% ( 1) 00:09:10.948 11.001 - 11.052: 97.6803% ( 1) 00:09:10.948 11.258 - 11.309: 97.6973% ( 1) 00:09:10.948 11.720 - 11.772: 97.7142% ( 1) 00:09:10.948 11.823 - 11.875: 97.7311% ( 1) 00:09:10.948 12.337 - 12.389: 97.7481% ( 1) 00:09:10.948 13.263 - 13.365: 97.7650% ( 1) 00:09:10.948 13.468 - 13.571: 97.7988% ( 2) 00:09:10.948 13.982 - 14.085: 97.8158% ( 1) 00:09:10.948 14.291 - 14.394: 97.8327% ( 1) 00:09:10.948 14.394 - 14.496: 97.8666% ( 2) 00:09:10.948 14.599 - 14.702: 97.9343% ( 4) 00:09:10.948 14.702 - 14.805: 98.0359% ( 6) 00:09:10.948 14.805 - 14.908: 98.1544% ( 7) 00:09:10.948 14.908 - 15.010: 98.2899% ( 8) 00:09:10.948 15.010 - 15.113: 98.5100% ( 13) 00:09:10.948 15.113 - 15.216: 98.7640% ( 15) 00:09:10.948 15.216 - 15.319: 98.8825% ( 7) 00:09:10.948 15.319 - 15.422: 98.9841% ( 6) 00:09:10.948 15.422 - 15.524: 99.0687% ( 5) 00:09:10.948 15.524 - 15.627: 99.1365% ( 4) 00:09:10.948 15.627 - 15.730: 99.1534% ( 1) 00:09:10.948 15.730 - 15.833: 99.1703% ( 1) 00:09:10.948 15.833 - 15.936: 99.2211% ( 3) 00:09:10.948 15.936 - 16.039: 99.2381% ( 1) 00:09:10.948 16.039 - 16.141: 99.2550% ( 1) 00:09:10.948 16.141 - 16.244: 99.2889% ( 2) 00:09:10.948 16.244 - 16.347: 99.3735% ( 5) 00:09:10.948 16.347 - 16.450: 99.3905% ( 1) 00:09:10.948 16.861 - 16.964: 99.4074% ( 1) 00:09:10.948 17.067 - 17.169: 99.4243% ( 1) 00:09:10.948 17.478 - 17.581: 99.4412% ( 1) 00:09:10.948 17.786 - 17.889: 99.4582% ( 1) 00:09:10.948 18.095 - 18.198: 99.4751% ( 1) 00:09:10.948 18.917 - 19.020: 99.5090% ( 2) 00:09:10.948 19.020 - 19.123: 99.5259% ( 1) 00:09:10.948 19.123 - 19.226: 99.5428% ( 1) 00:09:10.948 19.226 - 19.329: 99.5936% ( 3) 00:09:10.948 19.329 - 19.431: 99.6106% ( 1) 00:09:10.948 19.431 - 19.534: 99.6275% ( 1) 00:09:10.948 19.637 - 19.740: 99.6444% ( 1) 00:09:10.948 19.740 - 19.843: 99.6614% ( 1) 00:09:10.948 20.151 - 20.254: 99.6783% ( 1) 00:09:10.948 20.562 - 20.665: 99.6952% ( 1) 00:09:10.948 20.768 - 20.871: 99.7122% ( 1) 00:09:10.948 22.104 - 22.207: 99.7291% ( 1) 00:09:10.948 23.235 - 23.338: 99.7460% ( 1) 00:09:10.948 23.338 - 23.441: 99.7630% ( 1) 00:09:10.948 24.161 - 24.263: 99.7799% ( 1) 00:09:10.948 24.675 - 24.778: 99.8137% ( 2) 00:09:10.948 24.778 - 24.880: 99.8307% ( 1) 00:09:10.948 24.983 - 25.086: 99.8476% ( 1) 00:09:10.948 25.086 - 25.189: 99.8645% ( 1) 00:09:10.948 26.011 - 26.114: 99.8815% ( 1) 00:09:10.948 26.937 - 27.142: 99.8984% ( 1) 00:09:10.948 28.170 - 28.376: 99.9153% ( 1) 00:09:10.948 28.993 - 29.198: 99.9323% ( 1) 00:09:10.948 31.049 - 31.255: 99.9492% ( 1) 
00:09:10.948 34.133 - 34.339: 99.9661% ( 1) 00:09:10.948 103.634 - 104.045: 99.9831% ( 1) 00:09:10.948 129.131 - 129.953: 100.0000% ( 1) 00:09:10.948 00:09:10.948 ************************************ 00:09:10.948 END TEST nvme_overhead 00:09:10.948 ************************************ 00:09:10.948 00:09:10.948 real 0m1.252s 00:09:10.948 user 0m1.080s 00:09:10.948 sys 0m0.124s 00:09:10.948 00:13:25 nvme.nvme_overhead -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:10.948 00:13:25 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:10.948 00:13:25 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:10.948 00:13:25 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:09:10.948 00:13:25 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:10.948 00:13:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.949 ************************************ 00:09:10.949 START TEST nvme_arbitration 00:09:10.949 ************************************ 00:09:10.949 00:13:25 nvme.nvme_arbitration -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:14.238 Initializing NVMe Controllers 00:09:14.238 Attached to 0000:00:10.0 00:09:14.238 Attached to 0000:00:11.0 00:09:14.238 Attached to 0000:00:13.0 00:09:14.238 Attached to 0000:00:12.0 00:09:14.238 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:14.238 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:14.238 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:14.238 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:14.238 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:14.238 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:14.238 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:14.238 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:14.238 Initialization complete. Launching workers. 
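The arbitration example echoes its full configuration before launching its workers; the per-core worker lines and the results table follow below. For reference, a standalone invocation with the same flags would look like this sketch. The flag glosses are inferred from the echoed configuration and the observed run (four lcores, a roughly 3-second wall time), not taken from the example's documentation:

    # Hypothetical standalone run of the SPDK arbitration example with the
    # configuration echoed above (binary path as used by this harness).
    #   -q 64     queue depth per worker       -w randrw  mixed workload
    #   -M 50     50% reads (inferred)         -t 3       run for ~3 seconds
    #   -c 0xf    lcores 0-3, one priority class per core (inferred)
    #   -i 0      shared-memory ID so it can coexist with other SPDK apps (inferred)
    sudo /home/vagrant/spdk_repo/spdk/build/examples/arbitration \
        -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 \
        -n 100000 -i 0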
00:09:14.238 Starting thread on core 1 with urgent priority queue 00:09:14.238 Starting thread on core 2 with urgent priority queue 00:09:14.238 Starting thread on core 3 with urgent priority queue 00:09:14.238 Starting thread on core 0 with urgent priority queue 00:09:14.238 QEMU NVMe Ctrl (12340 ) core 0: 4672.00 IO/s 21.40 secs/100000 ios 00:09:14.238 QEMU NVMe Ctrl (12342 ) core 0: 4672.00 IO/s 21.40 secs/100000 ios 00:09:14.238 QEMU NVMe Ctrl (12341 ) core 1: 4714.67 IO/s 21.21 secs/100000 ios 00:09:14.238 QEMU NVMe Ctrl (12342 ) core 1: 4714.67 IO/s 21.21 secs/100000 ios 00:09:14.238 QEMU NVMe Ctrl (12343 ) core 2: 4864.00 IO/s 20.56 secs/100000 ios 00:09:14.238 QEMU NVMe Ctrl (12342 ) core 3: 4522.67 IO/s 22.11 secs/100000 ios 00:09:14.238 ======================================================== 00:09:14.238 00:09:14.238 00:09:14.238 real 0m3.262s 00:09:14.238 user 0m9.057s 00:09:14.238 sys 0m0.127s 00:09:14.238 ************************************ 00:09:14.238 END TEST nvme_arbitration 00:09:14.238 ************************************ 00:09:14.238 00:13:28 nvme.nvme_arbitration -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:14.238 00:13:28 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:14.238 00:13:28 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:14.238 00:13:28 nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:14.238 00:13:28 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:14.238 00:13:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.238 ************************************ 00:09:14.238 START TEST nvme_single_aen 00:09:14.238 ************************************ 00:09:14.238 00:13:28 nvme.nvme_single_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:14.238 Asynchronous Event Request test 00:09:14.238 Attached to 0000:00:10.0 00:09:14.238 Attached to 0000:00:11.0 00:09:14.238 Attached to 0000:00:13.0 00:09:14.238 Attached to 0000:00:12.0 00:09:14.238 Reset controller to setup AER completions for this process 00:09:14.238 Registering asynchronous event callbacks... 
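The sequence that follows (read each controller's original threshold, drop it below the current composite temperature, catch the resulting AER in aer_cb, restore the threshold) is the standard NVMe temperature-alert mechanism built on Set Features, feature 0x04. Outside this harness the same trigger can be sketched with nvme-cli against a kernel-owned controller; /dev/nvme0 and the threshold values below are illustrative assumptions, not part of this test:

    # Hypothetical nvme-cli equivalent of the threshold dance traced below.
    # Feature 0x04 is Temperature Threshold; values are in Kelvin.
    nvme get-feature /dev/nvme0 -f 4          # original threshold (343 K in this log)
    nvme set-feature /dev/nvme0 -f 4 -v 300   # below the ~323 K composite temperature,
                                              # so the controller raises an AER
    nvme set-feature /dev/nvme0 -f 4 -v 343   # restore the original threshold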
00:09:14.238 Getting orig temperature thresholds of all controllers 00:09:14.238 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.238 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.238 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.238 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.238 Setting all controllers temperature threshold low to trigger AER 00:09:14.238 Waiting for all controllers temperature threshold to be set lower 00:09:14.238 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.238 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:14.238 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.238 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:14.238 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.238 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:14.238 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.238 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:14.238 Waiting for all controllers to trigger AER and reset threshold 00:09:14.238 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.238 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.238 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.238 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.238 Cleaning up... 00:09:14.238 00:09:14.238 real 0m0.241s 00:09:14.238 user 0m0.078s 00:09:14.238 sys 0m0.112s 00:09:14.238 00:13:28 nvme.nvme_single_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:14.238 00:13:28 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:14.238 ************************************ 00:09:14.238 END TEST nvme_single_aen 00:09:14.238 ************************************ 00:09:14.498 00:13:28 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:14.498 00:13:28 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:14.498 00:13:28 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:14.498 00:13:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.498 ************************************ 00:09:14.498 START TEST nvme_doorbell_aers 00:09:14.498 ************************************ 00:09:14.498 00:13:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1121 -- # nvme_doorbell_aers 00:09:14.498 00:13:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:14.498 00:13:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:14.498 00:13:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:14.498 00:13:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:14.498 00:13:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:14.498 00:13:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # local bdfs 00:09:14.498 00:13:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:14.498 00:13:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:14.498 00:13:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 
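The xtrace lines above expand the harness's controller discovery: gen_nvme.sh emits a JSON bdev configuration and jq pulls each controller's PCI address (traddr) out of it; the count check and printf that complete the function follow on the next lines. Reassembled, together with the per-device loop the test then runs, it comes to this sketch (paths as used by this harness):

    # get_nvme_bdfs and the doorbell_aers loop, reassembled from the trace.
    rootdir=/home/vagrant/spdk_repo/spdk

    get_nvme_bdfs() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        ((${#bdfs[@]} == 0)) && return 1     # fail if no controllers were found
        printf '%s\n' "${bdfs[@]}"
    }

    # One doorbell_aers pass per controller, capped at 10 seconds each.
    for bdf in $(get_nvme_bdfs); do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
            -r "trtype:PCIe traddr:$bdf"
    done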
00:09:14.498 00:13:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:09:14.498 00:13:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:14.498 00:13:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:14.498 00:13:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:14.757 [2024-07-23 00:13:29.352205] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. Dropping the request. 00:09:24.729 Executing: test_write_invalid_db 00:09:24.729 Waiting for AER completion... 00:09:24.729 Failure: test_write_invalid_db 00:09:24.729 00:09:24.729 Executing: test_invalid_db_write_overflow_sq 00:09:24.729 Waiting for AER completion... 00:09:24.729 Failure: test_invalid_db_write_overflow_sq 00:09:24.729 00:09:24.729 Executing: test_invalid_db_write_overflow_cq 00:09:24.729 Waiting for AER completion... 00:09:24.729 Failure: test_invalid_db_write_overflow_cq 00:09:24.729 00:09:24.729 00:13:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:24.729 00:13:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:24.729 [2024-07-23 00:13:39.388961] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. Dropping the request. 00:09:34.713 Executing: test_write_invalid_db 00:09:34.713 Waiting for AER completion... 00:09:34.713 Failure: test_write_invalid_db 00:09:34.713 00:09:34.713 Executing: test_invalid_db_write_overflow_sq 00:09:34.713 Waiting for AER completion... 00:09:34.713 Failure: test_invalid_db_write_overflow_sq 00:09:34.713 00:09:34.713 Executing: test_invalid_db_write_overflow_cq 00:09:34.713 Waiting for AER completion... 00:09:34.713 Failure: test_invalid_db_write_overflow_cq 00:09:34.713 00:09:34.713 00:13:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:34.713 00:13:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:34.972 [2024-07-23 00:13:49.426887] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. Dropping the request. 00:09:44.988 Executing: test_write_invalid_db 00:09:44.988 Waiting for AER completion... 00:09:44.988 Failure: test_write_invalid_db 00:09:44.988 00:09:44.988 Executing: test_invalid_db_write_overflow_sq 00:09:44.988 Waiting for AER completion... 00:09:44.988 Failure: test_invalid_db_write_overflow_sq 00:09:44.988 00:09:44.988 Executing: test_invalid_db_write_overflow_cq 00:09:44.988 Waiting for AER completion... 
00:09:44.988 Failure: test_invalid_db_write_overflow_cq 00:09:44.988 00:09:44.988 00:13:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:44.988 00:13:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:44.988 [2024-07-23 00:13:59.488517] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. Dropping the request. 00:09:54.962 Executing: test_write_invalid_db 00:09:54.962 Waiting for AER completion... 00:09:54.962 Failure: test_write_invalid_db 00:09:54.962 00:09:54.962 Executing: test_invalid_db_write_overflow_sq 00:09:54.962 Waiting for AER completion... 00:09:54.962 Failure: test_invalid_db_write_overflow_sq 00:09:54.962 00:09:54.962 Executing: test_invalid_db_write_overflow_cq 00:09:54.962 Waiting for AER completion... 00:09:54.962 Failure: test_invalid_db_write_overflow_cq 00:09:54.962 00:09:54.962 00:09:54.962 real 0m40.298s 00:09:54.962 user 0m29.600s 00:09:54.962 sys 0m10.326s 00:09:54.962 00:14:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:54.962 00:14:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:54.962 ************************************ 00:09:54.962 END TEST nvme_doorbell_aers 00:09:54.962 ************************************ 00:09:54.962 00:14:09 nvme -- nvme/nvme.sh@97 -- # uname 00:09:54.962 00:14:09 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:54.962 00:14:09 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:54.962 00:14:09 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:09:54.962 00:14:09 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:54.962 00:14:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:54.962 ************************************ 00:09:54.962 START TEST nvme_multi_aen 00:09:54.962 ************************************ 00:09:54.962 00:14:09 nvme.nvme_multi_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:54.962 [2024-07-23 00:14:09.567248] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. Dropping the request. 00:09:54.962 [2024-07-23 00:14:09.567376] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. Dropping the request. 00:09:54.962 [2024-07-23 00:14:09.567398] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. Dropping the request. 00:09:54.962 [2024-07-23 00:14:09.568996] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. Dropping the request. 00:09:54.963 [2024-07-23 00:14:09.569041] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. Dropping the request. 00:09:54.963 [2024-07-23 00:14:09.569059] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. Dropping the request. 00:09:54.963 [2024-07-23 00:14:09.570455] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. 
Dropping the request. 00:09:54.963 [2024-07-23 00:14:09.570550] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. Dropping the request. 00:09:54.963 [2024-07-23 00:14:09.570607] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. Dropping the request. 00:09:54.963 [2024-07-23 00:14:09.571900] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. Dropping the request. 00:09:54.963 [2024-07-23 00:14:09.572065] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. Dropping the request. 00:09:54.963 [2024-07-23 00:14:09.572176] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80432) is not found. Dropping the request. 00:09:54.963 Child process pid: 80953 00:09:55.222 [Child] Asynchronous Event Request test 00:09:55.222 [Child] Attached to 0000:00:10.0 00:09:55.222 [Child] Attached to 0000:00:11.0 00:09:55.222 [Child] Attached to 0000:00:13.0 00:09:55.222 [Child] Attached to 0000:00:12.0 00:09:55.222 [Child] Registering asynchronous event callbacks... 00:09:55.222 [Child] Getting orig temperature thresholds of all controllers 00:09:55.222 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.222 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.222 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.222 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.222 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:55.222 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.222 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.222 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.222 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.222 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.222 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.222 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.222 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.222 [Child] Cleaning up... 00:09:55.222 Asynchronous Event Request test 00:09:55.222 Attached to 0000:00:10.0 00:09:55.222 Attached to 0000:00:11.0 00:09:55.222 Attached to 0000:00:13.0 00:09:55.222 Attached to 0000:00:12.0 00:09:55.222 Reset controller to setup AER completions for this process 00:09:55.222 Registering asynchronous event callbacks... 
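nvme_multi_aen runs the same aer utility as nvme_single_aen above, adding -m, which (judging by the [Child] prefixes in this output) forks a second process that attaches to the same controllers and watches for the events in parallel; the parent's own pass continues below. This reading of the flag is inferred from the output rather than taken from documentation:

    # Hypothetical direct invocations of the two aer tests traced in this log.
    # -T exercises the temperature-threshold AER; -m adds the child process.
    AER=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
    sudo "$AER" -T -i 0       # single-process variant (nvme_single_aen)
    sudo "$AER" -m -T -i 0    # multi-process variant (this test)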
00:09:55.222 Getting orig temperature thresholds of all controllers 00:09:55.222 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.222 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.222 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.222 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.222 Setting all controllers temperature threshold low to trigger AER 00:09:55.222 Waiting for all controllers temperature threshold to be set lower 00:09:55.222 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.222 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:55.222 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.222 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:55.222 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.222 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:55.222 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.222 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:55.222 Waiting for all controllers to trigger AER and reset threshold 00:09:55.222 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.222 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.222 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.222 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.222 Cleaning up... 00:09:55.222 00:09:55.222 real 0m0.512s 00:09:55.222 user 0m0.162s 00:09:55.222 sys 0m0.236s 00:09:55.222 00:14:09 nvme.nvme_multi_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:55.222 00:14:09 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:55.222 ************************************ 00:09:55.222 END TEST nvme_multi_aen 00:09:55.222 ************************************ 00:09:55.482 00:14:09 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:55.482 00:14:09 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:55.482 00:14:09 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:55.482 00:14:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:55.482 ************************************ 00:09:55.482 START TEST nvme_startup 00:09:55.482 ************************************ 00:09:55.482 00:14:09 nvme.nvme_startup -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:55.482 Initializing NVMe Controllers 00:09:55.482 Attached to 0000:00:10.0 00:09:55.482 Attached to 0000:00:11.0 00:09:55.482 Attached to 0000:00:13.0 00:09:55.482 Attached to 0000:00:12.0 00:09:55.482 Initialization complete. 00:09:55.482 Time used:131900.109 (us). 
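The Time used line above says attaching all four controllers took 131900.109 us, roughly 132 ms; the banner below closes the test. A standalone run would be the one-liner in this sketch, where reading -t as a microsecond startup budget is an inference from that line, not documented behavior:

    # Hypothetical standalone run of the startup test (path per this harness);
    # -t 1000000 appears to bound startup time at 1,000,000 us (1 s).
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000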
00:09:55.482 ************************************ 00:09:55.482 END TEST nvme_startup 00:09:55.482 ************************************ 00:09:55.482 00:09:55.482 real 0m0.211s 00:09:55.482 user 0m0.068s 00:09:55.482 sys 0m0.106s 00:09:55.482 00:14:10 nvme.nvme_startup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:55.482 00:14:10 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:55.741 00:14:10 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:55.741 00:14:10 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:55.741 00:14:10 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:55.741 00:14:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:55.741 ************************************ 00:09:55.741 START TEST nvme_multi_secondary 00:09:55.741 ************************************ 00:09:55.741 00:14:10 nvme.nvme_multi_secondary -- common/autotest_common.sh@1121 -- # nvme_multi_secondary 00:09:55.741 00:14:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=81003 00:09:55.741 00:14:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:55.741 00:14:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=81005 00:09:55.741 00:14:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:55.741 00:14:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:59.027 Initializing NVMe Controllers 00:09:59.027 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:59.027 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:59.027 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:59.027 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:59.027 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:59.027 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:59.027 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:59.027 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:59.027 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:59.027 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:59.027 Initialization complete. Launching workers. 
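nvme_multi_secondary exercises SPDK multi-process mode: one primary and two secondary spdk_nvme_perf instances share the same controllers through a common shared-memory ID (-i 0) while running on disjoint core masks, and their result tables follow below. The traced invocations condense to this sketch:

    # The three concurrent perf processes of the first round; commands are
    # taken verbatim from the trace. The harness backgrounds the first two and
    # runs the third in the foreground; backgrounding all three and waiting,
    # as here, is equivalent for the experiment.
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # pid0, lcore 0
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # pid1, lcore 1
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # third worker, lcore 2
    wait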
00:09:59.027 ======================================================== 00:09:59.027 Latency(us) 00:09:59.027 Device Information : IOPS MiB/s Average min max 00:09:59.027 PCIE (0000:00:10.0) NSID 1 from core 1: 4749.00 18.55 3366.32 1568.09 8250.60 00:09:59.027 PCIE (0000:00:11.0) NSID 1 from core 1: 4749.00 18.55 3368.22 1752.08 7955.38 00:09:59.027 PCIE (0000:00:13.0) NSID 1 from core 1: 4749.00 18.55 3368.71 1776.38 7713.94 00:09:59.027 PCIE (0000:00:12.0) NSID 1 from core 1: 4749.00 18.55 3369.09 1799.37 7572.75 00:09:59.027 PCIE (0000:00:12.0) NSID 2 from core 1: 4749.00 18.55 3369.41 1740.30 7697.50 00:09:59.027 PCIE (0000:00:12.0) NSID 3 from core 1: 4749.00 18.55 3369.44 1725.86 7668.71 00:09:59.027 ======================================================== 00:09:59.027 Total : 28494.00 111.30 3368.53 1568.09 8250.60 00:09:59.027 00:09:59.286 Initializing NVMe Controllers 00:09:59.286 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:59.286 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:59.286 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:59.286 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:59.286 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:59.286 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:59.286 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:59.286 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:59.286 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:59.286 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:59.286 Initialization complete. Launching workers. 00:09:59.286 ======================================================== 00:09:59.286 Latency(us) 00:09:59.286 Device Information : IOPS MiB/s Average min max 00:09:59.286 PCIE (0000:00:10.0) NSID 1 from core 2: 3326.53 12.99 4808.48 1134.67 14657.03 00:09:59.286 PCIE (0000:00:11.0) NSID 1 from core 2: 3326.53 12.99 4809.36 1182.01 16992.93 00:09:59.286 PCIE (0000:00:13.0) NSID 1 from core 2: 3326.53 12.99 4808.58 1130.95 17713.01 00:09:59.286 PCIE (0000:00:12.0) NSID 1 from core 2: 3326.53 12.99 4808.02 1070.68 18403.89 00:09:59.286 PCIE (0000:00:12.0) NSID 2 from core 2: 3326.53 12.99 4802.27 948.76 15122.18 00:09:59.286 PCIE (0000:00:12.0) NSID 3 from core 2: 3326.53 12.99 4802.64 878.03 14576.64 00:09:59.286 ======================================================== 00:09:59.286 Total : 19959.20 77.97 4806.56 878.03 18403.89 00:09:59.286 00:09:59.286 00:14:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 81003 00:10:01.198 Initializing NVMe Controllers 00:10:01.198 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:01.198 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:01.198 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:01.198 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:01.198 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:01.198 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:01.198 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:01.198 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:01.198 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:01.198 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:01.198 Initialization complete. Launching workers. 
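Before the core-0 numbers print below, a quick consistency check on the two tables above: spdk_nvme_perf reads 4096-byte blocks (-o 4096), so the MiB/s column is just IOPS times the block size. Taking the first core-1 row, 4749.00 IO/s x 4096 B = 19,451,904 B/s, and 19,451,904 / 1,048,576 = 18.55 MiB/s, matching the printed value:

    # IOPS -> MiB/s for 4 KiB reads (first row of the core-1 table above).
    echo '4749.00 * 4096 / 1048576' | bc -l    # prints 18.55078125, the 18.55 in the table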
00:10:01.198 ======================================================== 00:10:01.198 Latency(us) 00:10:01.198 Device Information : IOPS MiB/s Average min max 00:10:01.198 PCIE (0000:00:10.0) NSID 1 from core 0: 8392.03 32.78 1904.98 919.66 8077.68 00:10:01.198 PCIE (0000:00:11.0) NSID 1 from core 0: 8392.03 32.78 1906.07 938.88 8169.08 00:10:01.198 PCIE (0000:00:13.0) NSID 1 from core 0: 8392.03 32.78 1906.04 833.44 7902.34 00:10:01.198 PCIE (0000:00:12.0) NSID 1 from core 0: 8392.03 32.78 1906.02 727.69 7196.06 00:10:01.198 PCIE (0000:00:12.0) NSID 2 from core 0: 8392.03 32.78 1905.99 594.73 7429.99 00:10:01.198 PCIE (0000:00:12.0) NSID 3 from core 0: 8392.03 32.78 1905.96 478.40 8042.84 00:10:01.198 ======================================================== 00:10:01.198 Total : 50352.20 196.69 1905.84 478.40 8169.08 00:10:01.198 00:10:01.199 00:14:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 81005 00:10:01.199 00:14:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=81079 00:10:01.199 00:14:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:01.199 00:14:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=81080 00:10:01.199 00:14:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:01.199 00:14:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:04.498 Initializing NVMe Controllers 00:10:04.498 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:04.498 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:04.498 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:04.498 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:04.498 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:04.498 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:04.498 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:04.498 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:04.498 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:04.498 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:04.498 Initialization complete. Launching workers. 
00:10:04.498 ======================================================== 00:10:04.498 Latency(us) 00:10:04.498 Device Information : IOPS MiB/s Average min max 00:10:04.498 PCIE (0000:00:10.0) NSID 1 from core 0: 5024.63 19.63 3181.71 1010.04 8374.14 00:10:04.498 PCIE (0000:00:11.0) NSID 1 from core 0: 5024.63 19.63 3183.53 1063.82 8760.82 00:10:04.498 PCIE (0000:00:13.0) NSID 1 from core 0: 5024.63 19.63 3183.29 1046.81 8543.08 00:10:04.498 PCIE (0000:00:12.0) NSID 1 from core 0: 5024.63 19.63 3183.24 1067.78 8053.81 00:10:04.498 PCIE (0000:00:12.0) NSID 2 from core 0: 5024.63 19.63 3183.24 1049.02 7123.92 00:10:04.498 PCIE (0000:00:12.0) NSID 3 from core 0: 5024.63 19.63 3183.38 1030.91 7674.18 00:10:04.498 ======================================================== 00:10:04.498 Total : 30147.77 117.76 3183.06 1010.04 8760.82 00:10:04.498 00:10:04.498 Initializing NVMe Controllers 00:10:04.498 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:04.498 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:04.498 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:04.498 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:04.498 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:04.498 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:04.498 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:04.498 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:04.498 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:04.498 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:04.498 Initialization complete. Launching workers. 00:10:04.498 ======================================================== 00:10:04.498 Latency(us) 00:10:04.498 Device Information : IOPS MiB/s Average min max 00:10:04.498 PCIE (0000:00:10.0) NSID 1 from core 1: 5156.68 20.14 3100.23 997.01 7877.08 00:10:04.498 PCIE (0000:00:11.0) NSID 1 from core 1: 5156.68 20.14 3102.03 1021.63 8494.03 00:10:04.498 PCIE (0000:00:13.0) NSID 1 from core 1: 5156.68 20.14 3102.34 1021.12 8997.85 00:10:04.498 PCIE (0000:00:12.0) NSID 1 from core 1: 5156.68 20.14 3102.39 1001.29 9014.81 00:10:04.498 PCIE (0000:00:12.0) NSID 2 from core 1: 5156.68 20.14 3102.28 1005.08 8817.00 00:10:04.498 PCIE (0000:00:12.0) NSID 3 from core 1: 5156.68 20.14 3102.17 1024.60 8533.36 00:10:04.498 ======================================================== 00:10:04.498 Total : 30940.09 120.86 3101.90 997.01 9014.81 00:10:04.498 00:10:06.403 Initializing NVMe Controllers 00:10:06.403 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:06.403 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:06.403 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:06.403 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:06.403 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:06.403 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:06.403 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:06.403 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:06.403 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:06.403 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:06.403 Initialization complete. Launching workers. 
00:10:06.403 ======================================================== 00:10:06.403 Latency(us) 00:10:06.403 Device Information : IOPS MiB/s Average min max 00:10:06.403 PCIE (0000:00:10.0) NSID 1 from core 2: 3287.69 12.84 4865.19 1024.73 13252.81 00:10:06.403 PCIE (0000:00:11.0) NSID 1 from core 2: 3287.69 12.84 4866.47 1051.03 12627.58 00:10:06.403 PCIE (0000:00:13.0) NSID 1 from core 2: 3287.69 12.84 4866.40 1041.15 13478.50 00:10:06.403 PCIE (0000:00:12.0) NSID 1 from core 2: 3287.69 12.84 4866.08 1061.36 12674.02 00:10:06.403 PCIE (0000:00:12.0) NSID 2 from core 2: 3287.69 12.84 4866.23 1043.78 12515.58 00:10:06.403 PCIE (0000:00:12.0) NSID 3 from core 2: 3287.69 12.84 4866.16 819.48 12408.81 00:10:06.403 ======================================================== 00:10:06.403 Total : 19726.14 77.06 4866.09 819.48 13478.50 00:10:06.403 00:10:06.403 ************************************ 00:10:06.403 END TEST nvme_multi_secondary 00:10:06.403 ************************************ 00:10:06.403 00:14:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 81079 00:10:06.403 00:14:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 81080 00:10:06.403 00:10:06.403 real 0m10.502s 00:10:06.403 user 0m18.327s 00:10:06.403 sys 0m0.849s 00:10:06.403 00:14:20 nvme.nvme_multi_secondary -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:06.403 00:14:20 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:06.403 00:14:20 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:06.403 00:14:20 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:06.403 00:14:20 nvme -- common/autotest_common.sh@1085 -- # [[ -e /proc/80029 ]] 00:10:06.403 00:14:20 nvme -- common/autotest_common.sh@1086 -- # kill 80029 00:10:06.403 00:14:20 nvme -- common/autotest_common.sh@1087 -- # wait 80029 00:10:06.403 [2024-07-23 00:14:20.793374] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 00:10:06.403 [2024-07-23 00:14:20.793510] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 00:10:06.403 [2024-07-23 00:14:20.793562] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 00:10:06.403 [2024-07-23 00:14:20.793614] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 00:10:06.403 [2024-07-23 00:14:20.796790] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 00:10:06.403 [2024-07-23 00:14:20.796882] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 00:10:06.403 [2024-07-23 00:14:20.796935] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 00:10:06.403 [2024-07-23 00:14:20.796985] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 00:10:06.403 [2024-07-23 00:14:20.798115] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 
00:10:06.403 [2024-07-23 00:14:20.798198] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 00:10:06.403 [2024-07-23 00:14:20.798243] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 00:10:06.403 [2024-07-23 00:14:20.798321] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 00:10:06.403 [2024-07-23 00:14:20.799387] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 00:10:06.403 [2024-07-23 00:14:20.799484] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 00:10:06.403 [2024-07-23 00:14:20.799529] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 00:10:06.403 [2024-07-23 00:14:20.799578] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80952) is not found. Dropping the request. 00:10:06.403 00:14:20 nvme -- common/autotest_common.sh@1089 -- # rm -f /var/run/spdk_stub0 00:10:06.403 00:14:20 nvme -- common/autotest_common.sh@1093 -- # echo 2 00:10:06.403 00:14:20 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:06.403 00:14:20 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:06.403 00:14:20 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:06.403 00:14:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:06.403 ************************************ 00:10:06.403 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:06.403 ************************************ 00:10:06.403 00:14:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:06.403 * Looking for test storage... 
00:10:06.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:06.403 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:06.403 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:06.403 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:06.403 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:06.403 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:06.403 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:06.403 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # bdfs=() 00:10:06.403 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # local bdfs 00:10:06.403 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:10:06.403 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:10:06.403 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:06.403 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:10:06.403 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:06.404 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:06.404 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:10:06.663 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:10:06.663 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:06.663 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:10:06.663 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:06.663 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:06.663 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=81227 00:10:06.663 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:06.663 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:06.663 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 81227 00:10:06.663 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@827 -- # '[' -z 81227 ']' 00:10:06.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
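Once spdk_tgt is up and listening on /var/tmp/spdk.sock, the whole test is driven over JSON-RPC; the sequence traced below condenses to this sketch. It attaches the first controller as bdev nvme0, arms a one-shot injected error on admin opcode 10 (Get Features) that is held for up to 15 s instead of being submitted, fires a Get Features (Number of Queues, cdw10=7) through bdev_nvme_send_cmd so it gets stuck, and then checks that bdev_nvme_reset_controller completes the stuck command manually (diff_time=2 in this run, against the 5-second budget):

    # The traced RPC round trip, gathered into one place; paths and the
    # base64-encoded Get Features command are taken verbatim from the log.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    "$RPC" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    "$RPC" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
    sleep 2
    "$RPC" bdev_nvme_reset_controller nvme0
    wait    # send_cmd returns once the reset completes the stuck admin command
    "$RPC" bdev_nvme_detach_controller nvme0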
00:10:06.663 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.663 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:06.663 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.663 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:06.663 00:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:06.663 [2024-07-23 00:14:21.278977] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:10:06.663 [2024-07-23 00:14:21.279102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81227 ] 00:10:06.922 [2024-07-23 00:14:21.446291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.922 [2024-07-23 00:14:21.491126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.922 [2024-07-23 00:14:21.491347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.922 [2024-07-23 00:14:21.491492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.922 [2024-07-23 00:14:21.491364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # return 0 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:07.489 nvme0n1 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_VOfVc.txt 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:07.489 true 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721693662 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=81250 00:10:07.489 00:14:22 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:07.489 00:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:10.059 [2024-07-23 00:14:24.158405] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:10.059 [2024-07-23 00:14:24.158935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:10.059 [2024-07-23 00:14:24.159086] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:10.059 [2024-07-23 00:14:24.159198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.059 [2024-07-23 00:14:24.161667] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 81250 00:10:10.059 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 81250 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 81250 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_VOfVc.txt 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_VOfVc.txt 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 81227 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@946 -- # '[' -z 81227 ']' 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # kill -0 81227 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # uname 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81227 00:10:10.059 killing process with pid 81227 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81227' 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@965 -- # kill 81227 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # wait 81227 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:10.059 ************************************ 00:10:10.059 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:10.059 ************************************ 00:10:10.059 00:10:10.059 real 0m3.780s 00:10:10.059 user 0m12.941s 00:10:10.059 sys 0m0.714s 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:10.059 00:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:10.318 00:14:24 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:10.318 00:14:24 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:10.318 00:14:24 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:10.318 00:14:24 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:10.318 00:14:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:10.318 ************************************ 00:10:10.318 START TEST nvme_fio 00:10:10.318 ************************************ 00:10:10.318 00:14:24 nvme.nvme_fio -- common/autotest_common.sh@1121 -- # nvme_fio_test 00:10:10.318 00:14:24 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:10.318 00:14:24 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:10.318 00:14:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:10.318 00:14:24 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:10.318 00:14:24 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # local bdfs 00:10:10.318 00:14:24 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:10.318 00:14:24 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:10.318 00:14:24 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:10:10.318 00:14:24 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:10:10.318 00:14:24 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:10.318 00:14:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:10.318 00:14:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:10.318 00:14:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:10.318 00:14:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:10.318 00:14:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:10.577 00:14:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:10.577 00:14:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:10.836 00:14:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:10.836 00:14:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:10.836 00:14:25 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:11.095 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:11.095 fio-3.35 00:10:11.095 Starting 1 thread 00:10:15.284 00:10:15.284 test: (groupid=0, jobs=1): err= 0: pid=81380: Tue Jul 23 00:14:29 2024 00:10:15.284 read: IOPS=23.0k, BW=90.0MiB/s (94.3MB/s)(180MiB/2001msec) 00:10:15.284 slat (nsec): min=3784, max=62225, avg=4483.63, stdev=993.21 00:10:15.284 clat (usec): min=208, max=11497, avg=2775.03, stdev=283.14 00:10:15.284 lat (usec): min=213, max=11560, avg=2779.51, stdev=283.56 00:10:15.284 clat percentiles (usec): 00:10:15.284 | 1.00th=[ 2507], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2671], 00:10:15.284 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2769], 00:10:15.284 | 70.00th=[ 2802], 80.00th=[ 2868], 90.00th=[ 2933], 95.00th=[ 2966], 00:10:15.284 | 99.00th=[ 3294], 99.50th=[ 4359], 99.90th=[ 5932], 99.95th=[ 8717], 00:10:15.284 | 99.99th=[11076] 00:10:15.284 bw ( KiB/s): min=87104, max=93224, per=98.71%, avg=90930.67, stdev=3335.71, samples=3 00:10:15.284 iops : min=21776, max=23306, avg=22732.67, stdev=833.93, samples=3 00:10:15.284 write: IOPS=22.9k, BW=89.4MiB/s (93.8MB/s)(179MiB/2001msec); 0 zone resets 00:10:15.284 slat (nsec): min=3849, max=30228, avg=4608.70, stdev=1000.21 00:10:15.284 clat (usec): min=233, max=11285, avg=2780.70, stdev=294.16 00:10:15.284 lat (usec): min=238, max=11299, avg=2785.31, stdev=294.57 00:10:15.284 clat percentiles (usec): 00:10:15.284 | 1.00th=[ 2507], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2671], 00:10:15.284 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2769], 00:10:15.284 | 70.00th=[ 2802], 80.00th=[ 2868], 90.00th=[ 2933], 95.00th=[ 2966], 
00:10:15.284 | 99.00th=[ 3392], 99.50th=[ 4424], 99.90th=[ 7111], 99.95th=[ 9241], 00:10:15.284 | 99.99th=[10945] 00:10:15.284 bw ( KiB/s): min=86840, max=93904, per=99.48%, avg=91106.67, stdev=3754.23, samples=3 00:10:15.284 iops : min=21710, max=23476, avg=22776.67, stdev=938.56, samples=3 00:10:15.284 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:15.284 lat (msec) : 2=0.06%, 4=99.26%, 10=0.61%, 20=0.03% 00:10:15.284 cpu : usr=99.50%, sys=0.05%, ctx=2, majf=0, minf=626 00:10:15.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:15.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.284 issued rwts: total=46082,45814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.284 00:10:15.284 Run status group 0 (all jobs): 00:10:15.284 READ: bw=90.0MiB/s (94.3MB/s), 90.0MiB/s-90.0MiB/s (94.3MB/s-94.3MB/s), io=180MiB (189MB), run=2001-2001msec 00:10:15.284 WRITE: bw=89.4MiB/s (93.8MB/s), 89.4MiB/s-89.4MiB/s (93.8MB/s-93.8MB/s), io=179MiB (188MB), run=2001-2001msec 00:10:15.284 ----------------------------------------------------- 00:10:15.284 Suppressions used: 00:10:15.284 count bytes template 00:10:15.284 1 32 /usr/src/fio/parse.c 00:10:15.284 1 8 libtcmalloc_minimal.so 00:10:15.284 ----------------------------------------------------- 00:10:15.284 00:10:15.284 00:14:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:15.284 00:14:29 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:15.284 00:14:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:15.284 00:14:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:15.284 00:14:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:15.284 00:14:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:15.284 00:14:29 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:15.284 00:14:29 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:15.284 00:14:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:15.284 00:14:29 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:15.284 00:14:29 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:15.284 00:14:29 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:15.284 00:14:29 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:15.284 00:14:29 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:10:15.284 00:14:29 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:15.284 00:14:29 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:15.284 00:14:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:15.285 00:14:29 nvme.nvme_fio -- 
common/autotest_common.sh@1341 -- # grep libasan 00:10:15.285 00:14:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:15.285 00:14:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:15.285 00:14:29 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:15.285 00:14:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:10:15.285 00:14:29 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:15.285 00:14:29 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:15.543 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:15.543 fio-3.35 00:10:15.543 Starting 1 thread 00:10:19.744 00:10:19.744 test: (groupid=0, jobs=1): err= 0: pid=81441: Tue Jul 23 00:14:33 2024 00:10:19.744 read: IOPS=20.8k, BW=81.3MiB/s (85.2MB/s)(163MiB/2001msec) 00:10:19.744 slat (nsec): min=3840, max=56468, avg=4496.18, stdev=1030.00 00:10:19.744 clat (usec): min=214, max=14303, avg=3065.31, stdev=438.16 00:10:19.744 lat (usec): min=219, max=14360, avg=3069.81, stdev=438.61 00:10:19.744 clat percentiles (usec): 00:10:19.744 | 1.00th=[ 2147], 5.00th=[ 2769], 10.00th=[ 2868], 20.00th=[ 2933], 00:10:19.744 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3032], 60.00th=[ 3064], 00:10:19.744 | 70.00th=[ 3097], 80.00th=[ 3130], 90.00th=[ 3195], 95.00th=[ 3261], 00:10:19.744 | 99.00th=[ 4621], 99.50th=[ 5735], 99.90th=[ 8094], 99.95th=[11338], 00:10:19.744 | 99.99th=[13829] 00:10:19.744 bw ( KiB/s): min=79473, max=85216, per=99.69%, avg=82989.67, stdev=3081.27, samples=3 00:10:19.744 iops : min=19868, max=21304, avg=20747.33, stdev=770.46, samples=3 00:10:19.744 write: IOPS=20.7k, BW=81.0MiB/s (84.9MB/s)(162MiB/2001msec); 0 zone resets 00:10:19.744 slat (nsec): min=3926, max=51756, avg=4638.66, stdev=1035.90 00:10:19.744 clat (usec): min=230, max=14073, avg=3077.92, stdev=457.89 00:10:19.744 lat (usec): min=235, max=14087, avg=3082.56, stdev=458.40 00:10:19.744 clat percentiles (usec): 00:10:19.744 | 1.00th=[ 2089], 5.00th=[ 2769], 10.00th=[ 2900], 20.00th=[ 2966], 00:10:19.744 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:10:19.744 | 70.00th=[ 3130], 80.00th=[ 3163], 90.00th=[ 3195], 95.00th=[ 3261], 00:10:19.744 | 99.00th=[ 4817], 99.50th=[ 5866], 99.90th=[ 8717], 99.95th=[11731], 00:10:19.744 | 99.99th=[13566] 00:10:19.744 bw ( KiB/s): min=79377, max=85456, per=100.00%, avg=83037.67, stdev=3224.30, samples=3 00:10:19.744 iops : min=19844, max=21364, avg=20759.33, stdev=806.22, samples=3 00:10:19.744 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:10:19.744 lat (msec) : 2=0.77%, 4=97.41%, 10=1.69%, 20=0.07% 00:10:19.744 cpu : usr=99.40%, sys=0.05%, ctx=6, majf=0, minf=626 00:10:19.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:19.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.744 issued rwts: total=41645,41477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.744 00:10:19.744 Run status group 0 (all jobs): 00:10:19.744 READ: bw=81.3MiB/s (85.2MB/s), 
81.3MiB/s-81.3MiB/s (85.2MB/s-85.2MB/s), io=163MiB (171MB), run=2001-2001msec 00:10:19.744 WRITE: bw=81.0MiB/s (84.9MB/s), 81.0MiB/s-81.0MiB/s (84.9MB/s-84.9MB/s), io=162MiB (170MB), run=2001-2001msec 00:10:19.744 ----------------------------------------------------- 00:10:19.744 Suppressions used: 00:10:19.744 count bytes template 00:10:19.744 1 32 /usr/src/fio/parse.c 00:10:19.744 1 8 libtcmalloc_minimal.so 00:10:19.744 ----------------------------------------------------- 00:10:19.744 00:10:19.744 00:14:33 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:19.744 00:14:33 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:19.744 00:14:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:19.744 00:14:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:19.744 00:14:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:19.744 00:14:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:19.744 00:14:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:19.744 00:14:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:19.744 00:14:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:20.003 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:20.004 fio-3.35 00:10:20.004 Starting 1 thread 00:10:24.194 00:10:24.194 test: 
(groupid=0, jobs=1): err= 0: pid=81498: Tue Jul 23 00:14:38 2024 00:10:24.194 read: IOPS=22.3k, BW=87.1MiB/s (91.3MB/s)(174MiB/2001msec) 00:10:24.194 slat (usec): min=3, max=270, avg= 4.58, stdev= 2.23 00:10:24.194 clat (usec): min=183, max=14464, avg=2867.42, stdev=358.75 00:10:24.194 lat (usec): min=187, max=14519, avg=2872.00, stdev=359.19 00:10:24.194 clat percentiles (usec): 00:10:24.194 | 1.00th=[ 2540], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2737], 00:10:24.194 | 30.00th=[ 2802], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:10:24.194 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3064], 00:10:24.194 | 99.00th=[ 3458], 99.50th=[ 4752], 99.90th=[ 7635], 99.95th=[10814], 00:10:24.194 | 99.99th=[13435] 00:10:24.194 bw ( KiB/s): min=86576, max=89792, per=99.15%, avg=88405.33, stdev=1653.07, samples=3 00:10:24.194 iops : min=21644, max=22448, avg=22101.33, stdev=413.27, samples=3 00:10:24.194 write: IOPS=22.1k, BW=86.5MiB/s (90.7MB/s)(173MiB/2001msec); 0 zone resets 00:10:24.194 slat (usec): min=3, max=624, avg= 4.69, stdev= 3.62 00:10:24.194 clat (usec): min=218, max=13589, avg=2874.62, stdev=370.89 00:10:24.194 lat (usec): min=223, max=13603, avg=2879.31, stdev=371.54 00:10:24.194 clat percentiles (usec): 00:10:24.194 | 1.00th=[ 2573], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:10:24.194 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2835], 60.00th=[ 2868], 00:10:24.194 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3064], 00:10:24.194 | 99.00th=[ 3490], 99.50th=[ 4948], 99.90th=[ 7701], 99.95th=[11076], 00:10:24.194 | 99.99th=[13173] 00:10:24.194 bw ( KiB/s): min=86184, max=89744, per=99.96%, avg=88525.33, stdev=2028.22, samples=3 00:10:24.194 iops : min=21546, max=22436, avg=22131.33, stdev=507.06, samples=3 00:10:24.194 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:10:24.194 lat (msec) : 2=0.11%, 4=99.26%, 10=0.53%, 20=0.07% 00:10:24.194 cpu : usr=98.55%, sys=0.45%, ctx=17, majf=0, minf=627 00:10:24.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:24.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.194 issued rwts: total=44603,44302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.194 00:10:24.194 Run status group 0 (all jobs): 00:10:24.194 READ: bw=87.1MiB/s (91.3MB/s), 87.1MiB/s-87.1MiB/s (91.3MB/s-91.3MB/s), io=174MiB (183MB), run=2001-2001msec 00:10:24.194 WRITE: bw=86.5MiB/s (90.7MB/s), 86.5MiB/s-86.5MiB/s (90.7MB/s-90.7MB/s), io=173MiB (181MB), run=2001-2001msec 00:10:24.194 ----------------------------------------------------- 00:10:24.194 Suppressions used: 00:10:24.194 count bytes template 00:10:24.194 1 32 /usr/src/fio/parse.c 00:10:24.194 1 8 libtcmalloc_minimal.so 00:10:24.194 ----------------------------------------------------- 00:10:24.194 00:10:24.194 00:14:38 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:24.194 00:14:38 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:24.194 00:14:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:24.194 00:14:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:24.194 00:14:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:24.194 00:14:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:24.454 00:14:38 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:24.454 00:14:38 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:24.454 00:14:38 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:24.713 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:24.713 fio-3.35 00:10:24.713 Starting 1 thread 00:10:28.902 00:10:28.902 test: (groupid=0, jobs=1): err= 0: pid=81558: Tue Jul 23 00:14:42 2024 00:10:28.902 read: IOPS=23.7k, BW=92.5MiB/s (97.0MB/s)(185MiB/2001msec) 00:10:28.902 slat (usec): min=3, max=3271, avg= 4.55, stdev=15.05 00:10:28.902 clat (usec): min=226, max=12526, avg=2699.02, stdev=476.34 00:10:28.902 lat (usec): min=230, max=12593, avg=2703.57, stdev=477.21 00:10:28.902 clat percentiles (usec): 00:10:28.902 | 1.00th=[ 2024], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2573], 00:10:28.902 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2638], 60.00th=[ 2671], 00:10:28.902 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2802], 95.00th=[ 2868], 00:10:28.902 | 99.00th=[ 5211], 99.50th=[ 6259], 99.90th=[ 7767], 99.95th=[ 8979], 00:10:28.902 | 99.99th=[12125] 00:10:28.902 bw ( KiB/s): min=91256, max=95432, per=98.53%, avg=93314.67, stdev=2088.62, samples=3 00:10:28.902 iops : min=22814, max=23858, avg=23328.67, stdev=522.15, samples=3 00:10:28.902 write: IOPS=23.5k, BW=91.9MiB/s (96.4MB/s)(184MiB/2001msec); 0 zone resets 00:10:28.902 slat (nsec): min=3885, 
max=76539, avg=4603.56, stdev=1216.58 00:10:28.902 clat (usec): min=191, max=12358, avg=2705.45, stdev=479.97 00:10:28.902 lat (usec): min=195, max=12372, avg=2710.06, stdev=480.59 00:10:28.902 clat percentiles (usec): 00:10:28.902 | 1.00th=[ 2040], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2573], 00:10:28.902 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2638], 60.00th=[ 2671], 00:10:28.902 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2802], 95.00th=[ 2868], 00:10:28.902 | 99.00th=[ 5276], 99.50th=[ 6259], 99.90th=[ 7767], 99.95th=[ 9372], 00:10:28.902 | 99.99th=[11731] 00:10:28.902 bw ( KiB/s): min=90744, max=97024, per=99.22%, avg=93392.00, stdev=3253.58, samples=3 00:10:28.902 iops : min=22686, max=24256, avg=23348.00, stdev=813.40, samples=3 00:10:28.902 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.04% 00:10:28.902 lat (msec) : 2=0.88%, 4=97.52%, 10=1.49%, 20=0.04% 00:10:28.902 cpu : usr=99.25%, sys=0.00%, ctx=4, majf=0, minf=624 00:10:28.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:28.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.902 issued rwts: total=47376,47087,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.902 00:10:28.902 Run status group 0 (all jobs): 00:10:28.902 READ: bw=92.5MiB/s (97.0MB/s), 92.5MiB/s-92.5MiB/s (97.0MB/s-97.0MB/s), io=185MiB (194MB), run=2001-2001msec 00:10:28.902 WRITE: bw=91.9MiB/s (96.4MB/s), 91.9MiB/s-91.9MiB/s (96.4MB/s-96.4MB/s), io=184MiB (193MB), run=2001-2001msec 00:10:28.902 ----------------------------------------------------- 00:10:28.902 Suppressions used: 00:10:28.902 count bytes template 00:10:28.902 1 32 /usr/src/fio/parse.c 00:10:28.902 1 8 libtcmalloc_minimal.so 00:10:28.902 ----------------------------------------------------- 00:10:28.902 00:10:28.902 00:14:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:28.902 00:14:43 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:28.902 00:10:28.902 real 0m18.263s 00:10:28.902 user 0m14.186s 00:10:28.902 sys 0m3.533s 00:10:28.902 00:14:43 nvme.nvme_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:28.902 00:14:43 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:28.902 ************************************ 00:10:28.902 END TEST nvme_fio 00:10:28.902 ************************************ 00:10:28.902 00:10:28.902 real 1m28.918s 00:10:28.902 user 3m29.838s 00:10:28.902 sys 0m20.725s 00:10:28.902 00:14:43 nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:28.902 00:14:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:28.902 ************************************ 00:10:28.902 END TEST nvme 00:10:28.902 ************************************ 00:10:28.902 00:14:43 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:10:28.902 00:14:43 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:28.902 00:14:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:28.902 00:14:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:28.902 00:14:43 -- common/autotest_common.sh@10 -- # set +x 00:10:28.902 ************************************ 00:10:28.902 START TEST nvme_scc 00:10:28.902 ************************************ 00:10:28.902 00:14:43 nvme_scc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:28.902 * Looking 
for test storage... 00:10:28.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:28.902 00:14:43 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:28.902 00:14:43 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:28.902 00:14:43 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:28.902 00:14:43 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:28.902 00:14:43 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:28.902 00:14:43 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.902 00:14:43 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.902 00:14:43 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.902 00:14:43 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.902 00:14:43 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.902 00:14:43 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.902 00:14:43 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:28.902 00:14:43 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.902 00:14:43 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:28.902 00:14:43 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:28.902 00:14:43 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:28.902 00:14:43 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:28.902 00:14:43 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:28.902 00:14:43 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:28.902 00:14:43 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:28.902 00:14:43 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:28.902 00:14:43 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:28.902 
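[Editor's note on the trace above: each nvme_fio pass resolves the sanitizer runtime that the SPDK fio plugin was linked against and preloads it ahead of the plugin, which is why every run repeats the same ldd/grep/awk triplet before invoking fio. A minimal standalone sketch of that wrapper pattern, using the paths printed in this log — illustrative only, not the exact autotest_common.sh helper:

    # Sketch: find the ASAN runtime the plugin links against, then
    # preload it ahead of the plugin so fio starts sanitized.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # e.g. asan_lib=/usr/lib64/libasan.so.8, as resolved in this run
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096

Without the preload, the usual failure mode is ASan aborting at startup because its runtime is not first in the initial library list; preloading the plugin alongside it keeps the spdk ioengine resolvable for fio.]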
00:14:43 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.902 00:14:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:28.902 00:14:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:28.902 00:14:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:28.902 00:14:43 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:29.189 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:29.756 Waiting for block devices as requested 00:10:29.756 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:29.756 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:29.756 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:30.014 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:35.295 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:35.295 00:14:49 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:35.295 00:14:49 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:35.295 00:14:49 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:35.295 00:14:49 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:35.295 00:14:49 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[sn]="12341 "' 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:35.295 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:35.296 00:14:49 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 
00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.296 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 
00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 
-- # eval 'nvme0[awun]="0"' 00:10:35.297 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:35.298 
00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:35.298 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:35.299 00:14:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.299 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 
lbads:12 rp:0 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:35.300 00:14:49 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:35.300 00:14:49 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:35.300 00:14:49 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:35.300 00:14:49 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:35.300 00:14:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:35.300 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 
00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:35.301 00:14:49 
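Note: fields like oacs=0x12a and oncs=0x15d captured in this trace are bit masks, so later capability checks are plain shell arithmetic on the parsed array. For this nvme_scc suite the interesting capability is the Copy command; a hedged sketch, assuming the NVMe 2.0 ONCS bit layout (bit 8 = Copy, which 0x15d has set) and an illustrative helper name:

```bash
#!/usr/bin/env bash
# Sketch: test one capability bit out of a parsed id-ctrl field.
declare -A nvme1=([oncs]=0x15d)       # value as captured in the trace

supports_scc() {
    local -n ctrl=$1                  # nameref to a controller's array
    (( ${ctrl[oncs]} & 0x100 ))       # ONCS bit 8: Copy command support
}

if supports_scc nvme1; then
    echo "nvme1 advertises the Simple Copy command"
fi
```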
nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.301 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 
00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:35.302 00:14:49 
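Note: earlier in this pass (functions.sh@60-63) nvme0 was filed into three registries — ctrls, nvmes (which stores the *name* of the per-controller namespace array, nvme0_ns) and bdfs (its PCI address, 0000:00:11.0) — and nvme1 will be filed the same way once its namespaces are read. A sketch of how such name-indirected registries are typically walked afterwards; only the array names and values come from the trace, the loop body is illustrative:

```bash
#!/usr/bin/env bash
# Sketch: walk the registries that nvme/functions.sh builds up.
declare -A ctrls=([nvme0]=nvme0) bdfs=([nvme0]=0000:00:11.0)
declare -A nvmes=([nvme0]=nvme0_ns)   # value is the *name* of another array
declare -A nvme0_ns=([1]=nvme0n1)

for ctrl in "${!ctrls[@]}"; do
    declare -n ns_map=${nvmes[$ctrl]}   # follow the stored array name
    echo "$ctrl @ ${bdfs[$ctrl]}: namespaces ${ns_map[*]}"
    unset -n ns_map                     # clear the nameref before the next ctrl
done
```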
nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.302 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:35.303 00:14:49 
nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:35.303 00:14:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:35.303 00:14:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:35.303 00:14:49 
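Note: the flbas value just read for nvme1n1 (0x7) selects one of the lbafN descriptors the same way nvme0n1's flbas=0x4 selected lbaf4 (ms:0 lbads:12) above. A short sketch of that decoding, assuming the standard FLBAS layout where bits 3:0 index the LBA format and lbads is the log2 of the logical block size; values taken from the nvme0n1 trace:

```bash
#!/usr/bin/env bash
# Sketch: derive the in-use block size from parsed id-ns fields.
declare -A nvme0n1=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')

fmt=$(( ${nvme0n1[flbas]} & 0xf ))   # FLBAS bits 3:0 pick the LBA format
lbaf=${nvme0n1[lbaf$fmt]}            # "ms:0 lbads:12 rp:0 (in use)"
lbads=${lbaf#*lbads:}                # -> "12 rp:0 (in use)"
lbads=${lbads%% *}                   # -> "12"
echo "nvme0n1 uses lbaf$fmt: $(( 1 << lbads ))-byte logical blocks"  # 4096
```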
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:35.303 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:35.304 00:14:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
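The long run of xtrace above is nvme/functions.sh repeating one pattern: nvme_get pipes /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 through a loop that splits each "field : value" line with IFS=: and read -r reg val, then evals the pair into a globally scoped associative array (local -gA nvme1n1=()). A minimal sketch of that pattern, assuming bash 4.3+ for namerefs; parse_kv is a hypothetical name, not SPDK's helper, and the real script uses eval plus positional refs instead of direct assignment:

#!/usr/bin/env bash
# Sketch of the nvme_get pattern traced above: turn "key : value" output
# into an associative array.
parse_kv() {
  local -n map=$1; shift
  local reg val
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}      # keys are space-padded in nvme-cli output
    [[ -n $reg ]] || continue     # skip blank lines
    map[$reg]=${val# }            # direct assignment; the real helper evals
  done < <("$@")
}
declare -A ns=()
parse_kv ns /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
echo "nsze=${ns[nsze]}"           # 0x17a17a in the run above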
00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:35.304 00:14:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:35.305 00:14:49 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:35.305 00:14:49 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:10:35.305 00:14:49 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:35.305 00:14:49 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
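One value worth decoding from the entries just above: nvme2[ver]=0x10400. The version register packs major/minor/tertiary as 16/8/8 bits, so 0x10400 is NVMe 1.4.0. A quick shell check:

ver=$((0x10400))   # value parsed above
echo "NVMe $(( ver >> 16 )).$(( (ver >> 8) & 0xff )).$(( ver & 0xff ))"   # -> NVMe 1.4.0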
00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.305 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:35.306 00:14:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
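The wctemp/cctemp values parsed just above are integer kelvins (the spec reports composite-temperature thresholds that way): 343 K and 373 K are the warning and critical thresholds, i.e. 70 C and 100 C under the usual k-273 conversion:

for k in 343 373; do echo "$k K = $(( k - 273 )) C"; done   # thresholds from the trace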
00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:35.306 00:14:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.306 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:35.307 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
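The identity fields collected for this controller (sn 12342, mn QEMU NVMe Ctrl, fr 8.0.0, and the subnqn nqn.2019-08.org.qemu:12342 parsed just above) mark it as a QEMU-emulated device. The Linux nvme driver exposes the same identity through sysfs, which makes for a cheap cross-check without nvme-cli; a sketch, assuming the standard attribute names:

c=/sys/class/nvme/nvme2
for f in model serial firmware_rev subsysnqn; do
  [[ -r $c/$f ]] && printf '%-12s %s\n' "$f" "$(<"$c/$f")"
done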
00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n1[nvmcap]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.308 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
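For nvme2n1 the formatted-LBA-size field came back as flbas=0x4 (a few entries above). Its low four bits index the lbafN descriptors the loop reads next, so the in-use format is lbaf4, which matches the "ms:0 lbads:12" descriptor seen for nvme1n1 earlier: 4096-byte blocks with no metadata. Decoded in shell arithmetic:

flbas=$((0x4)); lbads=12; ms=0                     # nvme2n1 values from the trace
echo "format index: $(( flbas & 0xf ))"            # -> 4, selects lbaf4
echo "block size: $(( 1 << lbads )) B, metadata: $ms B"   # -> 4096 B, 0 B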
00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 
00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 
00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:35.309 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:35.310 
00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 
-- # eval 'nvme2n2[msrc]="127"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.310 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 
00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@18 -- # 
shift 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.311 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 
00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:35.312 00:14:49 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:35.312 00:14:49 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.312 00:14:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:35.312 00:14:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:35.313 00:14:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:35.313 00:14:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:35.313 00:14:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:35.313 00:14:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:35.313 00:14:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:35.313 00:14:49 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:35.313 00:14:49 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:35.313 00:14:49 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:35.313 00:14:49 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:35.313 00:14:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:35.313 00:14:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:35.313 00:14:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:35.313 00:14:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.313 00:14:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:35.313 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.313 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.313 00:14:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 
00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:35.574 00:14:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 
00:10:35.574-00:10:35.576 00:14:49-00:14:50 nvme_scc -- nvme/functions.sh@21-23 -- # id-ctrl register parse for nvme3, one IFS=: / read -r reg val / eval cycle per register:
    mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
    wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0
    fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1
    anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
    sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0
    icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
    subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
    ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
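The register dump above is produced by a single loop: functions.sh splits each line of `nvme id-ctrl` output on the first colon into reg/val and stores non-empty values into a per-controller associative array via eval. A minimal sketch of that pattern, assuming an `nvme` CLI on PATH; the helper name sketch_nvme_get is illustrative, not the real functions.sh helper, and a nameref stands in for the traced eval:

    #!/usr/bin/env bash
    # Sketch of the parse loop traced above: split "reg : val" lines from
    # `nvme id-ctrl` and store non-empty values in an associative array.
    declare -A nvme3=()

    sketch_nvme_get() {
      local -n _ctrl=$1          # target array, e.g. nvme3 (nameref, not eval)
      local dev=$2 reg val
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/} # keys in the trace are bare register names
        [[ -n $val ]] && _ctrl[$reg]=${val# }
      done < <(nvme id-ctrl "$dev")
    }

    sketch_nvme_get nvme3 /dev/nvme3
    echo "nvme3 oncs=${nvme3[oncs]}"

Because `val` is the last field of the read, values that themselves contain colons (subnqn, the ps0 descriptor) survive the split intact.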
00:10:35.576 00:14:50 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:10:35.576 00:14:50 nvme_scc -- nvme/functions.sh@60-63 -- # ctrls[nvme3]=nvme3 nvmes[nvme3]=nvme3_ns bdfs[nvme3]=0000:00:13.0 ordered_ctrls[3]=nvme3
00:10:35.576 00:14:50 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) -- all four controllers scanned
00:10:35.576 00:14:50 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc: for each controller, ctrl_has_scc fetches oncs through get_nvme_ctrl_feature (functions.sh@69-76, a nameref lookup into the nvmeN array) and tests ONCS bit 8
00:10:35.577 00:14:50 nvme_scc -- nvme/functions.sh@197 -- # nvme1: oncs=0x15d, bit 8 set -> SCC supported
00:10:35.577 00:14:50 nvme_scc -- nvme/functions.sh@197 -- # nvme0: oncs=0x15d, bit 8 set -> SCC supported
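The capability check traced here reduces to a register lookup plus a bit test: ONCS bit 8 advertises the Simple Copy command, and 0x15d (0b1_0101_1101) has it set. A sketch of both steps against the same array layout as above; sketch_get_reg is an illustrative name:

    # Look a register up in the per-controller array via nameref, then
    # test ONCS bit 8 (Simple Copy), as ctrl_has_scc does in the trace.
    sketch_get_reg() {
      local -n _ctrl=$1
      [[ -n ${_ctrl[$2]} ]] && echo "${_ctrl[$2]}"
    }

    oncs=$(sketch_get_reg nvme3 oncs)   # 0x15d in this run
    if (( oncs & 1 << 8 )); then        # hex string is fine in (( ))
      echo "nvme3 supports SCC"
    fi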
00:10:35.577 00:14:50 nvme_scc -- nvme/functions.sh@197 -- # nvme3: oncs=0x15d, bit 8 set -> SCC supported
00:10:35.577 00:14:50 nvme_scc -- nvme/functions.sh@197 -- # nvme2: oncs=0x15d, bit 8 set -> SCC supported
00:10:35.577 00:14:50 nvme_scc -- nvme/functions.sh@206-207 -- # echo nvme1; return 0 -- the first matching controller in iteration order is used
00:10:35.577 00:14:50 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 bdf=0000:00:10.0
00:10:35.577 00:14:50 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:36.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:36.713 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:36.713 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:36.713 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:36.972 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
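setup.sh flips each controller between the kernel nvme driver and uio_pci_generic, and the current binding is visible in sysfs. A quick check using the BDFs from this run (a sketch for inspection, not part of the test suite):

    # Show which driver each NVMe test device is currently bound to.
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      drv=unbound
      [[ -e /sys/bus/pci/devices/$bdf/driver ]] &&
        drv=$(basename "$(readlink /sys/bus/pci/devices/$bdf/driver)")
      echo "$bdf -> $drv"
    done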
00:10:36.972 00:14:51 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:36.972 ************************************
00:10:36.972 START TEST nvme_simple_copy
00:10:36.972 ************************************
00:10:37.231 Initializing NVMe Controllers
00:10:37.231 Attaching to 0000:00:10.0
00:10:37.231 Controller supports SCC. Attached to 0000:00:10.0
00:10:37.231 Namespace ID: 1 size: 6GB
00:10:37.231 Initialization complete.
00:10:37.231 Controller QEMU NVMe Ctrl (12340 )
00:10:37.231 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:10:37.231 Namespace Block Size:4096
00:10:37.231 Writing LBAs 0 to 63 with Random Data
00:10:37.231 Copied LBAs from 0 - 63 to the Destination LBA 256
00:10:37.231 LBAs matching Written Data: 64
00:10:37.231 ************************************
00:10:37.231 END TEST nvme_simple_copy
00:10:37.231 ************************************
00:10:37.231 real 0m0.265s
00:10:37.231 user 0m0.081s
00:10:37.231 sys  0m0.083s
00:10:37.231 ************************************
00:10:37.231 END TEST nvme_scc
00:10:37.231 ************************************
00:10:37.231 real 0m8.721s
00:10:37.231 user 0m1.410s
00:10:37.231 sys  0m2.326s
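run_test brackets each test with the START/END banners and the real/user/sys summary seen above. A rough sketch of that shape, with the banner text taken from the log; the internals are an assumption for illustration, not the autotest_common.sh source:

    # Rough shape of the run_test wrapper: banner, timed execution, banner.
    run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }

    run_test_sketch nvme_simple_copy \
      "$rootdir/test/nvme/simple_copy/simple_copy" -r 'trtype:pcie traddr:0000:00:10.0'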
00:10:37.490 00:14:51 -- spdk/autotest.sh@223,226,229,232 -- # [[ 0 -eq 1 ]] / [[ 0 -eq 1 ]] / [[ '' -eq 1 ]] / [[ 1 -eq 1 ]] -- only the FDP suite is selected
00:10:37.490 00:14:51 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:10:37.490 ************************************
00:10:37.490 START TEST nvme_fdp
00:10:37.490 ************************************
00:10:37.490 * Looking for test storage...
00:10:37.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:10:37.490 00:14:52 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:10:37.490 00:14:52 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:10:37.490 00:14:52 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:37.490 00:14:52 nvme_fdp -- paths/export.sh@2-6 -- # export PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:... (the protoc/go/golangci prefix has been prepended on every sourcing and now appears four times) ...:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:37.491 00:14:52 nvme_fdp -- nvme/functions.sh@10-13 -- # declare -A ctrls nvmes bdfs; declare -a ordered_ctrls
00:10:37.491 00:14:52 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
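Because paths/export.sh prepends the toolchain directories once per source, the PATH exported above carries several duplicate entries. That is harmless, but a dedupe one-liner keeps it readable (a sketch, not part of the test suite):

    # Drop duplicate PATH entries while preserving first-seen order.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}   # trim the trailing ':' left by ORS
    export PATH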
00:10:37.491 00:14:52 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:37.491 00:14:52 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:10:38.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:38.317 Waiting for block devices as requested
00:10:38.586 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:10:38.586 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:10:38.586 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:10:38.870 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:10:44.154 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
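"Waiting for block devices as requested" covers the window between rebinding a controller to nvme and the kernel surfacing its namespaces; the 0000:00:13.0 warning above means some of those uevents were missed. A polling sketch for the same condition, illustrative only and not the setup.sh implementation:

    # Poll until a namespace block device appears for a controller,
    # or give up after roughly ten seconds.
    wait_for_nvme_block() {
      local ctrl=$1 i
      for ((i = 0; i < 100; i++)); do
        compgen -G "/sys/class/nvme/$ctrl/${ctrl}n*" > /dev/null && return 0
        sleep 0.1
      done
      return 1
    }

    wait_for_nvme_block nvme0 || echo "no namespace for nvme0" >&2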
00:10:44.154 00:14:58 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:10:44.154 00:14:58 nvme_fdp -- nvme/functions.sh@47-51 -- # /sys/class/nvme/nvme0 at pci 0000:00:11.0, pci_can_use -> ctrl_dev=nvme0
00:10:44.154 00:14:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 (via /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
00:10:44.154-00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21-23 -- # id-ctrl register parse for nvme0:
    vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
    rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
    crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
    wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0
    hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0
    pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0
    icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
    subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
    ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    rwt='0 rwl:0 idle_power:- active_power:-'
00:10:44.157 00:14:58 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:44.157 
00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:44.157 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:44.158 
00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:44.158 00:14:58 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:44.158 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
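The id-ns pass above has filled in the LBA formats for nvme0n1; lbaf4 (ms:0 lbads:12) is flagged "(in use)", so the namespace runs 4096-byte blocks with no metadata, and together with nsze=0x140000 that fixes the usable size at exactly 5 GiB. A minimal sketch of that arithmetic, reusing the field names parsed above (the helper name is illustrative, not part of functions.sh):

  #!/usr/bin/env bash
  # Values copied from the nvme0n1 array built by nvme_get in this trace.
  declare -A nvme0n1=( [nsze]=0x140000 [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )

  ns_bytes() {                                  # illustrative helper, not from functions.sh
    local -n ns=$1
    local fmt=$(( ns[flbas] & 0xf ))            # low nibble of FLBAS selects the LBA format
    local lbads=${ns[lbaf$fmt]#*lbads:}         # -> "12 rp:0 (in use)"
    lbads=${lbads%% *}                          # keep the exponent only
    echo $(( ns[nsze] * (1 << lbads) ))         # blocks * 2^lbads bytes
  }

  ns_bytes nvme0n1   # 0x140000 * 4096 = 5368709120 bytes = 5 GiB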
00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:44.159 00:14:58 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:44.159 00:14:58 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:44.159 00:14:58 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:44.159 00:14:58 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:44.159 00:14:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 
00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.159 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
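At this point the walk has moved on to the second controller: functions.sh@16 runs /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 and the loop at functions.sh@21-23 splits each "reg : val" output line on ':' and evals it into the nvme1 associative array. A rough reconstruction of that helper, inferred from the traced line numbers (the real functions.sh may trim and quote differently):

  #!/usr/bin/env bash
  nvme_get() {
    local ref=$1 reg val                          # functions.sh@17
    shift                                         # functions.sh@18
    local -gA "$ref=()"                           # functions.sh@20: global assoc array, e.g. nvme1
    while IFS=: read -r reg val; do               # functions.sh@21
      [[ -n $val ]] || continue                   # functions.sh@22: skip header/blank lines
      reg=${reg//[[:space:]]/}                    # assumed: strip the padding around the key
      eval "${ref}[$reg]=\"${val# }\""            # functions.sh@23: drop one leading space only
    done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16
  }

  nvme_get nvme1 id-ctrl /dev/nvme1
  echo "${nvme1[sn]}"    # "12340 ", trailing pad kept, matching the trace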
00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:44.160 00:14:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
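The thermal fields just parsed, wctemp=343 and cctemp=373, are the warning and critical composite-temperature thresholds; NVMe reports them in Kelvin, so they correspond to 70 C and 100 C. A quick conversion using the array built above:

  # WCTEMP/CCTEMP are in Kelvin per the NVMe spec; values from this trace.
  declare -A nvme1=( [wctemp]=343 [cctemp]=373 )
  printf 'warn at %d C, critical at %d C\n' \
    $(( nvme1[wctemp] - 273 )) $(( nvme1[cctemp] - 273 ))   # -> 70 C, 100 C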
00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.160 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:44.161 00:14:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
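oncs=0x15d, captured just above, is the Optional NVM Command Support bitmap. Decoding it with the NVMe 2.0 bit assignments (the bit names below come from the spec, not from functions.sh) shows which optional I/O commands the emulated controller advertises:

  # Decode ONCS=0x15d; bit order follows the NVMe 2.0 figure for ONCS.
  declare -A nvme1=( [oncs]=0x15d )
  bits=(Compare WriteUncorrectable DatasetMgmt WriteZeroes SaveSelect
        Reservations Timestamp Verify Copy)
  for i in "${!bits[@]}"; do
    (( nvme1[oncs] & (1 << i) )) && echo "supported: ${bits[i]}"
  done
  # prints: Compare, DatasetMgmt, WriteZeroes, SaveSelect, Timestamp, Copy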
00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:44.161 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.162 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 
00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:44.163 00:14:58 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:44.163 00:14:58 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:44.163 00:14:58 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:10:44.163 00:14:58 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:44.163 00:14:58 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:44.164 00:14:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.164 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.165 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:44.166 00:14:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:44.166 00:14:58 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.166 
00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.166 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:44.167 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 
00:14:58 nvme_fdp -- nvme/functions.sh@21-23 -- the id-ns pairs fill the nvme2n1 array:
  nsze=0x100000   ncap=0x100000   nuse=0x100000   nsfeat=0x14   nlbaf=7   flbas=0x4
  mc=0x3   dpc=0x1f   dps=0   nmic=0   rescap=0   fpi=0   dlfeat=1
  nawun=0   nawupf=0   nacwu=0   nabsn=0   nabo=0   nabspf=0   noiob=0   nvmcap=0
  npwg=0   npwa=0   npdg=0   npda=0   nows=0   mssrl=128   mcl=128   msrc=127
  nulbaf=0   anagrpid=0   nsattr=0   nvmsetid=0   endgid=0
  nguid=00000000000000000000000000000000   eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0'    lbaf1='ms:8 lbads:9 rp:0'
  lbaf2='ms:16 lbads:9 rp:0'   lbaf3='ms:64 lbads:9 rp:0'
  lbaf4='ms:0 lbads:12 rp:0 (in use)'   lbaf5='ms:8 lbads:12 rp:0'
  lbaf6='ms:16 lbads:12 rp:0'   lbaf7='ms:64 lbads:12 rp:0'
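Once populated, the array reads back like any bash associative array, and the hex registers go straight through arithmetic expansion. A short usage sketch against the values captured above:

  printf 'nvme2n1: %d blocks\n' "$(( nvme2n1[nsze] ))"       # 0x100000 -> 1048576
  [[ ${nvme2n1[lbaf4]} == *'(in use)'* ]] && echo 'LBA format 4 active'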
00:14:58 nvme_fdp -- nvme/functions.sh@58 -- _ctrl_ns[1]=nvme2n1 records the namespace, and the @54 loop advances: /sys/class/nvme/nvme2/nvme2n2 exists, ns_dev=nvme2n2, and nvme_get nvme2n2 id-ns /dev/nvme2n2 repeats the parse.
00:14:58 nvme_fdp -- nvme/functions.sh@21-23 -- nvme2n2 reports the same id-ns values as nvme2n1, register for register: nsze/ncap/nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dps/nmic/rescap/fpi=0, dlfeat=1, the atomicity and placement fields (nawun through nows) all zero, mssrl=128, mcl=128, msrc=127, nulbaf/anagrpid/nsattr/nvmsetid/endgid=0, all-zero nguid and eui64, and the identical lbaf0-lbaf7 table with lbaf4 (ms:0 lbads:12 rp:0) in use.
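The enumeration wrapped around those per-namespace parses is a short glob loop. A sketch under the same assumptions as before (parse_into standing in for nvme_get; the real loop is functions.sh@54-58):

  ctrl=/sys/class/nvme/nvme2
  declare -A nvme2_ns=()
  declare -n _ctrl_ns=nvme2_ns                  # nameref, as @53 binds it
  for ns in "$ctrl/${ctrl##*/}n"*; do           # nvme2n1 nvme2n2 nvme2n3 ...
      [[ -e $ns ]] || continue
      ns_dev=${ns##*/}                          # e.g. nvme2n2
      parse_into "$ns_dev" nvme id-ns "/dev/$ns_dev"
      _ctrl_ns[${ns_dev##*n}]=$ns_dev           # key is the namespace number
  done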
00:14:58 nvme_fdp -- nvme/functions.sh@58 -- _ctrl_ns[2]=nvme2n2 is recorded; the loop then finds /sys/class/nvme/nvme2/nvme2n3, sets ns_dev=nvme2n3, and nvme_get nvme2n3 id-ns /dev/nvme2n3 starts the third parse.
00:14:58 nvme_fdp -- nvme/functions.sh@21-23 -- nvme2n3 matches its siblings exactly: nsze/ncap/nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dps/nmic/rescap/fpi=0, dlfeat=1, nawun through nows all zero, mssrl=128, mcl=128, msrc=127, nulbaf/anagrpid/nsattr/nvmsetid/endgid=0, all-zero nguid and eui64, and the parse reaches the LBA format entries.
00:14:58 nvme_fdp -- nvme/functions.sh@21-23 -- lbaf0-lbaf3 (ms:0/8/16/64 at lbads:9) and lbaf4-lbaf7 (the same ms values at lbads:12, lbaf4 in use) complete the format table, and functions.sh@58 records _ctrl_ns[3]=nvme2n3, finishing nvme2's namespace scan.
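All three namespaces select format 4, which is what makes them 4 KiB-block devices. A sketch of recovering the block size from the strings captured above (the assumption that flbas bits 0-3 select the format holds when the controller reports at most 16 formats, as here):

  fmt=$(( nvme2n3[flbas] & 0xf ))               # 0x4 & 0xf -> 4
  lbaf=${nvme2n3[lbaf$fmt]}                     # 'ms:0 lbads:12 rp:0 (in use)'
  lbads=${lbaf#*lbads:}                         # '12 rp:0 (in use)'
  lbads=${lbads%% *}                            # '12'
  echo "data block size: $(( 1 << lbads )) bytes"   # 4096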
ctrls["$ctrl_dev"]=nvme2 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:44.172 00:14:58 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:44.172 00:14:58 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:44.172 00:14:58 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:44.172 00:14:58 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:44.172 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:44.172 00:14:58 
00:10:44.172-173 00:14:58 nvme_fdp -- nvme/functions.sh@21-23 -- the rest of nvme3's id-ctrl output, register by register:
  rab=6   ieee=525400   cmic=0x2   mdts=7   cntlid=0   ver=0x10400
  rtd3r=0   rtd3e=0   oaes=0x100   ctratt=0x88010   rrls=0   cntrltype=1
  fguid=00000000-0000-0000-0000-000000000000   crdt1=0   crdt2=0   crdt3=0
  nvmsr=0   vwci=0   mec=0   oacs=0x12a   acl=3   aerl=3   frmw=0x3   lpa=0x7
  elpe=0   npss=0   avscc=0   apsta=0   wctemp=343   cctemp=373   mtfa=0
  hmpre=0   hmmin=0   tnvmcap=0   unvmcap=0   rpmbs=0   edstt=0 ...
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.173 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.174 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 
00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:44.175 00:14:58 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:10:44.175 00:14:58 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:44.175 00:14:58 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:10:44.176 00:14:58 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:10:44.176 00:14:58 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:10:44.176 00:14:58 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:44.176 00:14:58 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:44.176 00:14:58 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:45.113 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:45.680 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:45.680 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:45.680 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:45.680 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:45.938 00:15:00 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:45.938 00:15:00 nvme_fdp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:45.938 00:15:00 nvme_fdp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:45.938 00:15:00 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:45.938 ************************************ 00:10:45.938 START TEST nvme_flexible_data_placement 00:10:45.938 ************************************ 00:10:45.938 00:15:00 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:46.198 Initializing NVMe Controllers 00:10:46.198 Attaching to 0000:00:13.0 00:10:46.198 Controller supports FDP Attached to 0000:00:13.0 00:10:46.198 Namespace ID: 1 Endurance Group ID: 1 
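A note on the selection just traced: ctrl_has_fdp reduces to a single bit test. Identify Controller reports the CTRATT field, and bit 19 advertises Flexible Data Placement; in this run nvme3 returned ctratt=0x88010 (bit 19 set) while nvme0/nvme1/nvme2 returned 0x8000, so only nvme3 was echoed. A minimal standalone sketch of that check, with the hex values copied from the trace above:

    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))    # non-zero arithmetic result -> exit status 0
    }
    ctrl_has_fdp 0x88010 && echo "FDP supported"    # nvme3 in this run
    ctrl_has_fdp 0x8000  || echo "no FDP"           # nvme0/nvme1/nvme2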
00:10:46.198 Initialization complete. 00:10:46.198 00:10:46.198 ================================== 00:10:46.198 == FDP tests for Namespace: #01 == 00:10:46.198 ================================== 00:10:46.198 00:10:46.198 Get Feature: FDP: 00:10:46.198 ================= 00:10:46.198 Enabled: Yes 00:10:46.198 FDP configuration Index: 0 00:10:46.198 00:10:46.198 FDP configurations log page 00:10:46.198 =========================== 00:10:46.198 Number of FDP configurations: 1 00:10:46.198 Version: 0 00:10:46.198 Size: 112 00:10:46.198 FDP Configuration Descriptor: 0 00:10:46.198 Descriptor Size: 96 00:10:46.198 Reclaim Group Identifier format: 2 00:10:46.198 FDP Volatile Write Cache: Not Present 00:10:46.198 FDP Configuration: Valid 00:10:46.198 Vendor Specific Size: 0 00:10:46.198 Number of Reclaim Groups: 2 00:10:46.198 Number of Reclaim Unit Handles: 8 00:10:46.198 Max Placement Identifiers: 128 00:10:46.198 Number of Namespaces Supported: 256 00:10:46.198 Reclaim unit Nominal Size: 6000000 bytes 00:10:46.198 Estimated Reclaim Unit Time Limit: Not Reported 00:10:46.198 RUH Desc #000: RUH Type: Initially Isolated 00:10:46.198 RUH Desc #001: RUH Type: Initially Isolated 00:10:46.198 RUH Desc #002: RUH Type: Initially Isolated 00:10:46.198 RUH Desc #003: RUH Type: Initially Isolated 00:10:46.198 RUH Desc #004: RUH Type: Initially Isolated 00:10:46.198 RUH Desc #005: RUH Type: Initially Isolated 00:10:46.198 RUH Desc #006: RUH Type: Initially Isolated 00:10:46.198 RUH Desc #007: RUH Type: Initially Isolated 00:10:46.198 00:10:46.198 FDP reclaim unit handle usage log page 00:10:46.198 ====================================== 00:10:46.198 Number of Reclaim Unit Handles: 8 00:10:46.198 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:46.198 RUH Usage Desc #001: RUH Attributes: Unused 00:10:46.198 RUH Usage Desc #002: RUH Attributes: Unused 00:10:46.198 RUH Usage Desc #003: RUH Attributes: Unused 00:10:46.198 RUH Usage Desc #004: RUH Attributes: Unused 00:10:46.198 RUH Usage Desc #005: RUH Attributes: Unused 00:10:46.198 RUH Usage Desc #006: RUH Attributes: Unused 00:10:46.198 RUH Usage Desc #007: RUH Attributes: Unused 00:10:46.198 00:10:46.198 FDP statistics log page 00:10:46.198 ======================= 00:10:46.198 Host bytes with metadata written: 1804115968 00:10:46.198 Media bytes with metadata written: 1804386304 00:10:46.198 Media bytes erased: 0 00:10:46.198 00:10:46.198 FDP Reclaim unit handle status 00:10:46.198 ============================== 00:10:46.198 Number of RUHS descriptors: 2 00:10:46.198 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000776 00:10:46.198 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:46.198 00:10:46.198 FDP write on placement id: 0 success 00:10:46.198 00:10:46.198 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:46.198 00:10:46.198 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:46.198 00:10:46.198 Get Feature: FDP Events for Placement handle: #0 00:10:46.198 ======================== 00:10:46.198 Number of FDP Events: 6 00:10:46.198 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:46.198 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:46.198 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:46.198 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:46.198 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:46.198 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
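Two counters in the FDP statistics page above are worth relating: dividing media bytes written by host bytes written gives a rough write-amplification estimate for the run. A small illustrative sketch using the exact values logged (awk is used only for the floating-point division):

    hbmw=1804115968    # Host bytes with metadata written, from the log page above
    mbmw=1804386304    # Media bytes with metadata written
    awk -v h="$hbmw" -v m="$mbmw" 'BEGIN { printf "write amplification ~ %.5f\n", m / h }'
    # prints: write amplification ~ 1.00015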
00:10:46.198 00:10:46.198 FDP events log page 00:10:46.198 =================== 00:10:46.198 Number of FDP events: 1 00:10:46.198 FDP Event #0: 00:10:46.198 Event Type: RU Not Written to Capacity 00:10:46.198 Placement Identifier: Valid 00:10:46.198 NSID: Valid 00:10:46.198 Location: Valid 00:10:46.198 Placement Identifier: 0 00:10:46.198 Event Timestamp: 3 00:10:46.198 Namespace Identifier: 1 00:10:46.198 Reclaim Group Identifier: 0 00:10:46.198 Reclaim Unit Handle Identifier: 0 00:10:46.198 00:10:46.198 FDP test passed 00:10:46.198 ************************************ 00:10:46.198 00:10:46.198 real 0m0.241s 00:10:46.198 user 0m0.065s 00:10:46.198 sys 0m0.074s 00:10:46.198 00:15:00 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:46.198 00:15:00 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:46.198 END TEST nvme_flexible_data_placement 00:10:46.198 ************************************ 00:10:46.198 ************************************ 00:10:46.198 END TEST nvme_fdp 00:10:46.198 ************************************ 00:10:46.198 00:10:46.198 real 0m8.748s 00:10:46.198 user 0m1.396s 00:10:46.198 sys 0m2.374s 00:10:46.198 00:15:00 nvme_fdp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:46.198 00:15:00 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:46.198 00:15:00 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:10:46.198 00:15:00 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:46.198 00:15:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:46.198 00:15:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:46.198 00:15:00 -- common/autotest_common.sh@10 -- # set +x 00:10:46.198 ************************************ 00:10:46.198 START TEST nvme_rpc 00:10:46.198 ************************************ 00:10:46.198 00:15:00 nvme_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:46.457 * Looking for test storage... 
00:10:46.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:46.457 00:15:00 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:46.457 00:15:00 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:46.457 00:15:00 nvme_rpc -- common/autotest_common.sh@1520 -- # bdfs=() 00:10:46.457 00:15:00 nvme_rpc -- common/autotest_common.sh@1520 -- # local bdfs 00:10:46.457 00:15:00 nvme_rpc -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:10:46.457 00:15:00 nvme_rpc -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:10:46.457 00:15:00 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:46.457 00:15:00 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:46.457 00:15:00 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:46.457 00:15:00 nvme_rpc -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:46.457 00:15:00 nvme_rpc -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:10:46.457 00:15:01 nvme_rpc -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:10:46.457 00:15:01 nvme_rpc -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:46.457 00:15:01 nvme_rpc -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:10:46.457 00:15:01 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:46.457 00:15:01 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=82911 00:10:46.457 00:15:01 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:46.457 00:15:01 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:46.457 00:15:01 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 82911 00:10:46.457 00:15:01 nvme_rpc -- common/autotest_common.sh@827 -- # '[' -z 82911 ']' 00:10:46.457 00:15:01 nvme_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.457 00:15:01 nvme_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:46.457 00:15:01 nvme_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.457 00:15:01 nvme_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:46.457 00:15:01 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.457 [2024-07-23 00:15:01.127333] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
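The get_first_nvme_bdf helper traced above boils down to a jq pipeline over gen_nvme.sh output plus a non-empty guard. A condensed sketch, assuming rootdir points at the SPDK checkout as in this log:

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers detected" >&2; exit 1; }
    echo "${bdfs[0]}"    # here: 0000:00:10.0, the first of the four controllers found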
00:10:46.457 [2024-07-23 00:15:01.127679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82911 ] 00:10:46.715 [2024-07-23 00:15:01.278891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:46.715 [2024-07-23 00:15:01.325276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.715 [2024-07-23 00:15:01.325375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.281 00:15:01 nvme_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:47.281 00:15:01 nvme_rpc -- common/autotest_common.sh@860 -- # return 0 00:10:47.281 00:15:01 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:47.539 Nvme0n1 00:10:47.539 00:15:02 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:47.539 00:15:02 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:47.798 request: 00:10:47.798 { 00:10:47.798 "filename": "non_existing_file", 00:10:47.798 "bdev_name": "Nvme0n1", 00:10:47.798 "method": "bdev_nvme_apply_firmware", 00:10:47.798 "req_id": 1 00:10:47.798 } 00:10:47.798 Got JSON-RPC error response 00:10:47.798 response: 00:10:47.798 { 00:10:47.798 "code": -32603, 00:10:47.798 "message": "open file failed." 00:10:47.798 } 00:10:47.798 00:15:02 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:47.798 00:15:02 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:47.798 00:15:02 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:48.057 00:15:02 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:48.057 00:15:02 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 82911 00:10:48.057 00:15:02 nvme_rpc -- common/autotest_common.sh@946 -- # '[' -z 82911 ']' 00:10:48.057 00:15:02 nvme_rpc -- common/autotest_common.sh@950 -- # kill -0 82911 00:10:48.057 00:15:02 nvme_rpc -- common/autotest_common.sh@951 -- # uname 00:10:48.057 00:15:02 nvme_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:48.057 00:15:02 nvme_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82911 00:10:48.057 killing process with pid 82911 00:10:48.057 00:15:02 nvme_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:48.057 00:15:02 nvme_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:48.057 00:15:02 nvme_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82911' 00:10:48.057 00:15:02 nvme_rpc -- common/autotest_common.sh@965 -- # kill 82911 00:10:48.057 00:15:02 nvme_rpc -- common/autotest_common.sh@970 -- # wait 82911 00:10:48.315 ************************************ 00:10:48.315 END TEST nvme_rpc 00:10:48.315 ************************************ 00:10:48.315 00:10:48.315 real 0m2.179s 00:10:48.315 user 0m3.906s 00:10:48.315 sys 0m0.683s 00:10:48.315 00:15:02 nvme_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:48.315 00:15:02 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.573 00:15:03 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:48.574 00:15:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:48.574 
00:15:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:48.574 00:15:03 -- common/autotest_common.sh@10 -- # set +x 00:10:48.574 ************************************ 00:10:48.574 START TEST nvme_rpc_timeouts 00:10:48.574 ************************************ 00:10:48.574 00:15:03 nvme_rpc_timeouts -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:48.574 * Looking for test storage... 00:10:48.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:48.574 00:15:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:48.574 00:15:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_82965 00:10:48.574 00:15:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_82965 00:10:48.574 00:15:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=82989 00:10:48.574 00:15:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:48.574 00:15:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:48.574 00:15:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 82989 00:10:48.574 00:15:03 nvme_rpc_timeouts -- common/autotest_common.sh@827 -- # '[' -z 82989 ']' 00:10:48.574 00:15:03 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.574 00:15:03 nvme_rpc_timeouts -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:48.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.574 00:15:03 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.574 00:15:03 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:48.574 00:15:03 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:48.850 [2024-07-23 00:15:03.260929] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:10:48.850 [2024-07-23 00:15:03.261064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82989 ] 00:10:48.850 [2024-07-23 00:15:03.412480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:48.850 [2024-07-23 00:15:03.455386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.850 [2024-07-23 00:15:03.455486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.417 Checking default timeout settings: 00:10:49.417 00:15:04 nvme_rpc_timeouts -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:49.417 00:15:04 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # return 0 00:10:49.417 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:49.417 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:49.984 Making settings changes with rpc: 00:10:49.984 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:49.984 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:49.984 Check default vs. modified settings: 00:10:49.984 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:49.984 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_82965 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_82965 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:50.242 Setting action_on_timeout is changed as expected. 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_82965 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_82965 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:50.242 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:50.242 Setting timeout_us is changed as expected. 00:10:50.243 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:50.243 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:50.243 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:10:50.243 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:50.243 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_82965 00:10:50.243 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:50.502 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:50.502 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:50.502 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_82965 00:10:50.502 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:50.502 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:50.502 Setting timeout_admin_us is changed as expected. 00:10:50.502 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:50.502 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:50.502 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
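All three checks above follow one pattern: snapshot the JSON config with save_config before and after bdev_nvme_set_options, extract each field with the same grep/awk/sed pipeline, and require the value to differ. A condensed sketch of that loop (temp file names shortened; the RPC options are the ones used in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" save_config > /tmp/settings_default
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
    done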
00:10:50.502 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:50.502 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_82965 /tmp/settings_modified_82965 00:10:50.502 00:15:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 82989 00:10:50.502 00:15:04 nvme_rpc_timeouts -- common/autotest_common.sh@946 -- # '[' -z 82989 ']' 00:10:50.502 00:15:04 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # kill -0 82989 00:10:50.502 00:15:04 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # uname 00:10:50.502 00:15:04 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:50.502 00:15:04 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82989 00:10:50.502 killing process with pid 82989 00:10:50.502 00:15:04 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:50.502 00:15:04 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:50.502 00:15:04 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82989' 00:10:50.502 00:15:04 nvme_rpc_timeouts -- common/autotest_common.sh@965 -- # kill 82989 00:10:50.502 00:15:04 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # wait 82989 00:10:50.761 RPC TIMEOUT SETTING TEST PASSED. 00:10:50.761 00:15:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:10:50.761 ************************************ 00:10:50.761 END TEST nvme_rpc_timeouts 00:10:50.761 ************************************ 00:10:50.761 00:10:50.761 real 0m2.335s 00:10:50.761 user 0m4.484s 00:10:50.761 sys 0m0.677s 00:10:50.761 00:15:05 nvme_rpc_timeouts -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:50.761 00:15:05 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 00:15:05 -- spdk/autotest.sh@243 -- # uname -s 00:10:50.761 00:15:05 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:10:50.761 00:15:05 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:50.761 00:15:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:50.761 00:15:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:50.761 00:15:05 -- common/autotest_common.sh@10 -- # set +x 00:10:51.021 ************************************ 00:10:51.021 START TEST sw_hotplug 00:10:51.021 ************************************ 00:10:51.021 00:15:05 sw_hotplug -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:51.021 * Looking for test storage... 
00:10:51.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:51.021 00:15:05 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:51.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:51.848 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:51.848 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:51.848 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:51.848 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:51.848 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # hotplug_wait=6 00:10:51.848 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # hotplug_events=3 00:10:51.848 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvmes=($(nvme_in_userspace)) 00:10:51.848 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvme_in_userspace 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@230 -- # local class 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@15 -- # local i 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:10:51.848 00:15:06 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@15 -- # local i 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:51.848 00:15:06 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@15 -- # local i 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:10:51.849 00:15:06 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:51.849 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@127 -- # nvme_count=2 00:10:51.849 00:15:06 sw_hotplug -- 
nvme/sw_hotplug.sh@128 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:51.849 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@130 -- # xtrace_disable 00:10:51.849 00:15:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:52.108 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # run_hotplug 00:10:52.108 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@65 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:52.108 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@73 -- # hotplug_pid=83333 00:10:52.108 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@75 -- # debug_remove_attach_helper 3 6 false 00:10:52.108 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:52.108 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:10:52.108 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 false 00:10:52.108 00:15:06 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:10:52.108 00:15:06 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:10:52.108 00:15:06 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:10:52.108 00:15:06 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 false 00:10:52.108 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:10:52.108 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:10:52.108 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=false 00:10:52.108 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:10:52.108 00:15:06 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:10:52.367 Initializing NVMe Controllers 00:10:52.367 Attaching to 0000:00:10.0 00:10:52.367 Attaching to 0000:00:11.0 00:10:52.367 Attaching to 0000:00:12.0 00:10:52.367 Attaching to 0000:00:13.0 00:10:52.367 Attached to 0000:00:10.0 00:10:52.367 Attached to 0000:00:11.0 00:10:52.367 Attached to 0000:00:13.0 00:10:52.367 Attached to 0000:00:12.0 00:10:52.367 Initialization complete. Starting I/O... 
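The four BDFs above come out of nvme_in_userspace; the pipeline it traces at scripts/common.sh@239-242 condenses to a single command (class 01 = mass storage, subclass 08 = NVM, prog-if 02 = NVMe, i.e. class code 0108):

    # Print the BDF of every NVMe controller in the system. lspci -mm quotes
    # the class field and tags a non-zero prog-if as -p02, which is what the
    # grep filter and the quoted "0108" comparison rely on.
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'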
00:10:52.367 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:52.367 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:52.367 QEMU NVMe Ctrl (12343 ): 1 I/Os completed (+1) 00:10:52.367 QEMU NVMe Ctrl (12342 ): 0 I/Os completed (+0) 00:10:52.367 00:10:53.304 QEMU NVMe Ctrl (12340 ): 1408 I/Os completed (+1408) 00:10:53.304 QEMU NVMe Ctrl (12341 ): 1408 I/Os completed (+1408) 00:10:53.304 QEMU NVMe Ctrl (12343 ): 1413 I/Os completed (+1412) 00:10:53.304 QEMU NVMe Ctrl (12342 ): 1417 I/Os completed (+1417) 00:10:53.304 00:10:54.241 QEMU NVMe Ctrl (12340 ): 3036 I/Os completed (+1628) 00:10:54.241 QEMU NVMe Ctrl (12341 ): 3038 I/Os completed (+1630) 00:10:54.241 QEMU NVMe Ctrl (12343 ): 3051 I/Os completed (+1638) 00:10:54.241 QEMU NVMe Ctrl (12342 ): 3060 I/Os completed (+1643) 00:10:54.241 00:10:55.178 QEMU NVMe Ctrl (12340 ): 4908 I/Os completed (+1872) 00:10:55.178 QEMU NVMe Ctrl (12341 ): 4917 I/Os completed (+1879) 00:10:55.178 QEMU NVMe Ctrl (12343 ): 4926 I/Os completed (+1875) 00:10:55.178 QEMU NVMe Ctrl (12342 ): 4936 I/Os completed (+1876) 00:10:55.178 00:10:56.555 QEMU NVMe Ctrl (12340 ): 6901 I/Os completed (+1993) 00:10:56.555 QEMU NVMe Ctrl (12341 ): 6917 I/Os completed (+2000) 00:10:56.555 QEMU NVMe Ctrl (12343 ): 6924 I/Os completed (+1998) 00:10:56.555 QEMU NVMe Ctrl (12342 ): 6940 I/Os completed (+2004) 00:10:56.555 00:10:57.492 QEMU NVMe Ctrl (12340 ): 8877 I/Os completed (+1976) 00:10:57.492 QEMU NVMe Ctrl (12341 ): 8893 I/Os completed (+1976) 00:10:57.492 QEMU NVMe Ctrl (12343 ): 8906 I/Os completed (+1982) 00:10:57.492 QEMU NVMe Ctrl (12342 ): 8916 I/Os completed (+1976) 00:10:57.492 00:10:58.061 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:10:58.061 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:10:58.061 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:10:58.061 [2024-07-23 00:15:12.632420] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:58.061 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:58.061 [2024-07-23 00:15:12.633960] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.061 [2024-07-23 00:15:12.634015] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.061 [2024-07-23 00:15:12.634034] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.061 [2024-07-23 00:15:12.634052] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.061 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:58.061 [2024-07-23 00:15:12.636460] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.061 [2024-07-23 00:15:12.636577] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.061 [2024-07-23 00:15:12.636600] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.061 [2024-07-23 00:15:12.636618] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.061 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:10:58.061 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:10:58.061 [2024-07-23 00:15:12.673171] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:58.061 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:58.061 [2024-07-23 00:15:12.674943] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.062 [2024-07-23 00:15:12.675143] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.062 [2024-07-23 00:15:12.675326] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.062 [2024-07-23 00:15:12.675357] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.062 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:58.062 [2024-07-23 00:15:12.677226] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.062 [2024-07-23 00:15:12.677300] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.062 [2024-07-23 00:15:12.677348] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.062 [2024-07-23 00:15:12.677445] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.062 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:10:58.062 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:10:58.062 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:58.062 EAL: Scan for (pci) bus failed. 00:10:58.321 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:10:58.321 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:10:58.321 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:10:58.321 QEMU NVMe Ctrl (12343 ): 10925 I/Os completed (+2019) 00:10:58.321 QEMU NVMe Ctrl (12342 ): 10931 I/Os completed (+2015) 00:10:58.321 00:10:58.321 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:10:58.321 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:10:58.321 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:10:58.321 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:10:58.321 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:10:58.321 Attaching to 0000:00:10.0 00:10:58.321 Attached to 0000:00:10.0 00:10:58.321 00:15:12 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:10:58.579 00:15:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:10:58.579 00:15:13 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:10:58.579 Attaching to 0000:00:11.0 00:10:58.579 Attached to 0000:00:11.0 00:10:59.145 QEMU NVMe Ctrl (12343 ): 13005 I/Os completed (+2080) 00:10:59.145 QEMU NVMe Ctrl (12342 ): 13017 I/Os completed (+2086) 00:10:59.145 QEMU NVMe Ctrl (12340 ): 1823 I/Os completed (+1823) 00:10:59.145 QEMU NVMe Ctrl (12341 ): 1585 I/Os completed (+1585) 00:10:59.145 00:11:00.524 QEMU NVMe Ctrl (12343 ): 15008 I/Os completed (+2003) 00:11:00.524 QEMU NVMe Ctrl (12342 ): 15022 I/Os completed (+2005) 00:11:00.524 QEMU NVMe Ctrl (12340 ): 3827 I/Os completed (+2004) 00:11:00.524 QEMU NVMe Ctrl (12341 ): 3587 I/Os completed (+2002) 00:11:00.524 00:11:01.461 QEMU NVMe Ctrl (12343 ): 16904 I/Os completed (+1896) 00:11:01.461 QEMU NVMe Ctrl (12342 ): 16921 I/Os completed (+1899) 00:11:01.461 QEMU NVMe Ctrl (12340 ): 5730 I/Os completed (+1903) 00:11:01.461 QEMU NVMe Ctrl (12341 ): 5485 I/Os completed (+1898) 00:11:01.461 00:11:02.398 QEMU NVMe Ctrl (12343 ): 18888 
I/Os completed (+1984) 00:11:02.398 QEMU NVMe Ctrl (12342 ): 18906 I/Os completed (+1985) 00:11:02.398 QEMU NVMe Ctrl (12340 ): 7715 I/Os completed (+1985) 00:11:02.398 QEMU NVMe Ctrl (12341 ): 7470 I/Os completed (+1985) 00:11:02.398 00:11:03.335 QEMU NVMe Ctrl (12343 ): 20892 I/Os completed (+2004) 00:11:03.335 QEMU NVMe Ctrl (12342 ): 20912 I/Os completed (+2006) 00:11:03.335 QEMU NVMe Ctrl (12340 ): 9723 I/Os completed (+2008) 00:11:03.335 QEMU NVMe Ctrl (12341 ): 9474 I/Os completed (+2004) 00:11:03.335 00:11:04.272 QEMU NVMe Ctrl (12343 ): 22892 I/Os completed (+2000) 00:11:04.272 QEMU NVMe Ctrl (12342 ): 22913 I/Os completed (+2001) 00:11:04.272 QEMU NVMe Ctrl (12340 ): 11725 I/Os completed (+2002) 00:11:04.272 QEMU NVMe Ctrl (12341 ): 11481 I/Os completed (+2007) 00:11:04.272 00:11:05.209 QEMU NVMe Ctrl (12343 ): 24896 I/Os completed (+2004) 00:11:05.209 QEMU NVMe Ctrl (12342 ): 24917 I/Os completed (+2004) 00:11:05.209 QEMU NVMe Ctrl (12340 ): 13738 I/Os completed (+2013) 00:11:05.209 QEMU NVMe Ctrl (12341 ): 13491 I/Os completed (+2010) 00:11:05.209 00:11:06.146 QEMU NVMe Ctrl (12343 ): 26894 I/Os completed (+1998) 00:11:06.146 QEMU NVMe Ctrl (12342 ): 26915 I/Os completed (+1998) 00:11:06.146 QEMU NVMe Ctrl (12340 ): 15740 I/Os completed (+2002) 00:11:06.146 QEMU NVMe Ctrl (12341 ): 15494 I/Os completed (+2003) 00:11:06.146 00:11:07.522 QEMU NVMe Ctrl (12343 ): 28898 I/Os completed (+2004) 00:11:07.522 QEMU NVMe Ctrl (12342 ): 28922 I/Os completed (+2007) 00:11:07.522 QEMU NVMe Ctrl (12340 ): 17750 I/Os completed (+2010) 00:11:07.522 QEMU NVMe Ctrl (12341 ): 17505 I/Os completed (+2011) 00:11:07.522 00:11:08.456 QEMU NVMe Ctrl (12343 ): 30898 I/Os completed (+2000) 00:11:08.456 QEMU NVMe Ctrl (12342 ): 30928 I/Os completed (+2006) 00:11:08.456 QEMU NVMe Ctrl (12340 ): 19756 I/Os completed (+2006) 00:11:08.456 QEMU NVMe Ctrl (12341 ): 19514 I/Os completed (+2009) 00:11:08.456 00:11:09.392 QEMU NVMe Ctrl (12343 ): 32862 I/Os completed (+1964) 00:11:09.392 QEMU NVMe Ctrl (12342 ): 32892 I/Os completed (+1964) 00:11:09.392 QEMU NVMe Ctrl (12340 ): 21730 I/Os completed (+1974) 00:11:09.392 QEMU NVMe Ctrl (12341 ): 21486 I/Os completed (+1972) 00:11:09.392 00:11:10.327 QEMU NVMe Ctrl (12343 ): 34858 I/Os completed (+1996) 00:11:10.327 QEMU NVMe Ctrl (12342 ): 34888 I/Os completed (+1996) 00:11:10.327 QEMU NVMe Ctrl (12340 ): 23731 I/Os completed (+2001) 00:11:10.327 QEMU NVMe Ctrl (12341 ): 23486 I/Os completed (+2000) 00:11:10.327 00:11:10.585 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:11:10.585 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:11:10.585 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:10.585 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:10.585 [2024-07-23 00:15:25.021239] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:10.585 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:10.585 [2024-07-23 00:15:25.023139] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 [2024-07-23 00:15:25.023190] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 [2024-07-23 00:15:25.023209] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 [2024-07-23 00:15:25.023231] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:10.585 [2024-07-23 00:15:25.027226] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 [2024-07-23 00:15:25.027393] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 [2024-07-23 00:15:25.027454] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 [2024-07-23 00:15:25.027535] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:10.585 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:10.585 [2024-07-23 00:15:25.060252] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:10.585 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:10.585 [2024-07-23 00:15:25.061993] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 [2024-07-23 00:15:25.062075] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 [2024-07-23 00:15:25.062123] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 [2024-07-23 00:15:25.062210] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:10.585 [2024-07-23 00:15:25.064092] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 [2024-07-23 00:15:25.064129] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 [2024-07-23 00:15:25.064151] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 [2024-07-23 00:15:25.064167] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.585 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:11:10.585 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:11:10.585 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:10.585 EAL: Scan for (pci) bus failed. 
00:11:10.585 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:10.585 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:10.585 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:11:10.843 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:11:10.843 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:10.843 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:10.843 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:10.843 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:11:10.843 Attaching to 0000:00:10.0 00:11:10.843 Attached to 0000:00:10.0 00:11:10.843 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:11:10.843 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:10.843 00:15:25 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:11:10.843 Attaching to 0000:00:11.0 00:11:10.843 Attached to 0000:00:11.0 00:11:11.409 QEMU NVMe Ctrl (12343 ): 36973 I/Os completed (+2115) 00:11:11.409 QEMU NVMe Ctrl (12342 ): 37008 I/Os completed (+2120) 00:11:11.409 QEMU NVMe Ctrl (12340 ): 1001 I/Os completed (+1001) 00:11:11.409 QEMU NVMe Ctrl (12341 ): 787 I/Os completed (+787) 00:11:11.409 00:11:12.346 QEMU NVMe Ctrl (12343 ): 38945 I/Os completed (+1972) 00:11:12.346 QEMU NVMe Ctrl (12342 ): 38980 I/Os completed (+1972) 00:11:12.346 QEMU NVMe Ctrl (12340 ): 2977 I/Os completed (+1976) 00:11:12.346 QEMU NVMe Ctrl (12341 ): 2759 I/Os completed (+1972) 00:11:12.346 00:11:13.281 QEMU NVMe Ctrl (12343 ): 40933 I/Os completed (+1988) 00:11:13.281 QEMU NVMe Ctrl (12342 ): 40971 I/Os completed (+1991) 00:11:13.281 QEMU NVMe Ctrl (12340 ): 4977 I/Os completed (+2000) 00:11:13.281 QEMU NVMe Ctrl (12341 ): 4757 I/Os completed (+1998) 00:11:13.281 00:11:14.219 QEMU NVMe Ctrl (12343 ): 42929 I/Os completed (+1996) 00:11:14.219 QEMU NVMe Ctrl (12342 ): 42967 I/Os completed (+1996) 00:11:14.219 QEMU NVMe Ctrl (12340 ): 6974 I/Os completed (+1997) 00:11:14.219 QEMU NVMe Ctrl (12341 ): 6759 I/Os completed (+2002) 00:11:14.219 00:11:15.207 QEMU NVMe Ctrl (12343 ): 44893 I/Os completed (+1964) 00:11:15.207 QEMU NVMe Ctrl (12342 ): 44931 I/Os completed (+1964) 00:11:15.207 QEMU NVMe Ctrl (12340 ): 8941 I/Os completed (+1967) 00:11:15.207 QEMU NVMe Ctrl (12341 ): 8727 I/Os completed (+1968) 00:11:15.207 00:11:16.143 QEMU NVMe Ctrl (12343 ): 46853 I/Os completed (+1960) 00:11:16.143 QEMU NVMe Ctrl (12342 ): 46891 I/Os completed (+1960) 00:11:16.143 QEMU NVMe Ctrl (12340 ): 10903 I/Os completed (+1962) 00:11:16.143 QEMU NVMe Ctrl (12341 ): 10687 I/Os completed (+1960) 00:11:16.143 00:11:17.522 QEMU NVMe Ctrl (12343 ): 48809 I/Os completed (+1956) 00:11:17.522 QEMU NVMe Ctrl (12342 ): 48847 I/Os completed (+1956) 00:11:17.522 QEMU NVMe Ctrl (12340 ): 12861 I/Os completed (+1958) 00:11:17.522 QEMU NVMe Ctrl (12341 ): 12645 I/Os completed (+1958) 00:11:17.522 00:11:18.460 QEMU NVMe Ctrl (12343 ): 50765 I/Os completed (+1956) 00:11:18.460 QEMU NVMe Ctrl (12342 ): 50805 I/Os completed (+1958) 00:11:18.460 QEMU NVMe Ctrl (12340 ): 14821 I/Os completed (+1960) 00:11:18.460 QEMU NVMe Ctrl (12341 ): 14601 I/Os completed (+1956) 00:11:18.460 00:11:19.399 QEMU NVMe Ctrl (12343 ): 52737 I/Os completed (+1972) 00:11:19.399 QEMU NVMe Ctrl (12342 ): 52777 I/Os completed (+1972) 00:11:19.399 QEMU NVMe Ctrl (12340 ): 16803 I/Os completed (+1982) 00:11:19.399 QEMU NVMe Ctrl (12341 ): 16573 I/Os completed (+1972) 00:11:19.399 
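The rhythm visible in the counters — remove both controllers, re-attach them, then roughly twelve seconds of I/O before the next event — is remove_attach_helper's outer loop. The xtrace strips the redirection targets from the bare `echo` lines, so the sysfs paths below are assumptions; the loop structure and the values come straight from the trace (sw_hotplug.sh@124-125 and @33-@54):

    # Condensed sketch of remove_attach_helper in non-bdev mode; needs root.
    hotplug_events=3 hotplug_wait=6
    nvmes=(0000:00:10.0 0000:00:11.0)
    while ((hotplug_events--)); do
        for dev in "${nvmes[@]}"; do                     # surprise-remove each dev
            echo 1 > "/sys/bus/pci/devices/$dev/remove"
        done
        echo 1 > /sys/bus/pci/rescan                     # bring them back...
        for dev in "${nvmes[@]}"; do                     # ...bound to uio_pci_generic
            echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
            echo "$dev" > /sys/bus/pci/drivers_probe
            echo "" > "/sys/bus/pci/devices/$dev/driver_override"
        done
        sleep $((2 * hotplug_wait))                      # the `sleep 12` in the trace
    done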
00:11:20.337 QEMU NVMe Ctrl (12343 ): 54673 I/Os completed (+1936) 00:11:20.337 QEMU NVMe Ctrl (12342 ): 54714 I/Os completed (+1937) 00:11:20.337 QEMU NVMe Ctrl (12340 ): 18751 I/Os completed (+1948) 00:11:20.337 QEMU NVMe Ctrl (12341 ): 18509 I/Os completed (+1936) 00:11:20.337 00:11:21.273 QEMU NVMe Ctrl (12343 ): 56621 I/Os completed (+1948) 00:11:21.273 QEMU NVMe Ctrl (12342 ): 56665 I/Os completed (+1951) 00:11:21.273 QEMU NVMe Ctrl (12340 ): 20705 I/Os completed (+1954) 00:11:21.273 QEMU NVMe Ctrl (12341 ): 20464 I/Os completed (+1955) 00:11:21.273 00:11:22.210 QEMU NVMe Ctrl (12343 ): 58481 I/Os completed (+1860) 00:11:22.210 QEMU NVMe Ctrl (12342 ): 58525 I/Os completed (+1860) 00:11:22.210 QEMU NVMe Ctrl (12340 ): 22571 I/Os completed (+1866) 00:11:22.210 QEMU NVMe Ctrl (12341 ): 22324 I/Os completed (+1860) 00:11:22.210 00:11:22.778 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:11:22.778 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:11:22.778 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:22.778 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:22.778 [2024-07-23 00:15:37.405464] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:22.778 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:22.778 [2024-07-23 00:15:37.407640] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.778 [2024-07-23 00:15:37.407805] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.778 [2024-07-23 00:15:37.407867] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.778 [2024-07-23 00:15:37.407925] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.778 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:22.778 [2024-07-23 00:15:37.410197] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.778 [2024-07-23 00:15:37.410364] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.778 [2024-07-23 00:15:37.410417] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.778 [2024-07-23 00:15:37.410512] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.778 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:22.779 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:22.779 [2024-07-23 00:15:37.446672] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:22.779 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:22.779 [2024-07-23 00:15:37.448435] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.779 [2024-07-23 00:15:37.448484] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.779 [2024-07-23 00:15:37.448504] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.779 [2024-07-23 00:15:37.448521] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.779 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:22.779 [2024-07-23 00:15:37.450454] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.779 [2024-07-23 00:15:37.450497] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.779 [2024-07-23 00:15:37.450532] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.779 [2024-07-23 00:15:37.450548] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.038 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:11:23.038 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:11:23.038 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:23.038 EAL: Scan for (pci) bus failed. 00:11:23.038 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:23.038 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:23.038 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:11:23.038 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:11:23.038 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:23.038 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:23.038 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:23.038 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:11:23.038 Attaching to 0000:00:10.0 00:11:23.038 Attached to 0000:00:10.0 00:11:23.297 QEMU NVMe Ctrl (12343 ): 60417 I/Os completed (+1936) 00:11:23.297 QEMU NVMe Ctrl (12342 ): 60468 I/Os completed (+1943) 00:11:23.297 QEMU NVMe Ctrl (12340 ): 109 I/Os completed (+109) 00:11:23.297 00:11:23.297 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:11:23.297 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:23.297 00:15:37 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:11:23.297 Attaching to 0000:00:11.0 00:11:23.297 Attached to 0000:00:11.0 00:11:23.297 unregister_dev: QEMU NVMe Ctrl (12343 ) 00:11:23.297 unregister_dev: QEMU NVMe Ctrl (12342 ) 00:11:23.297 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:23.297 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:23.297 [2024-07-23 00:15:37.812962] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:35.508 00:15:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:11:35.508 00:15:49 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:11:35.508 00:15:49 sw_hotplug -- common/autotest_common.sh@714 -- # time=43.18 00:11:35.508 00:15:49 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.18 00:11:35.508 00:15:49 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=43.18 00:11:35.508 00:15:49 sw_hotplug -- 
nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.18 2 00:11:35.508 remove_attach_helper took 43.18s to complete (handling 2 nvme drive(s)) 00:15:49 sw_hotplug -- nvme/sw_hotplug.sh@79 -- # sleep 6 00:11:42.088 00:15:55 sw_hotplug -- nvme/sw_hotplug.sh@81 -- # kill -0 83333 00:11:42.088 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 81: kill: (83333) - No such process 00:11:42.088 00:15:55 sw_hotplug -- nvme/sw_hotplug.sh@83 -- # wait 83333 00:11:42.089 00:15:55 sw_hotplug -- nvme/sw_hotplug.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:42.089 00:15:55 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # tgt_run_hotplug 00:11:42.089 00:15:55 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # local dev 00:11:42.089 00:15:55 sw_hotplug -- nvme/sw_hotplug.sh@98 -- # spdk_tgt_pid=83878 00:11:42.089 00:15:55 sw_hotplug -- nvme/sw_hotplug.sh@100 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:42.089 00:15:55 sw_hotplug -- nvme/sw_hotplug.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:42.089 00:15:55 sw_hotplug -- nvme/sw_hotplug.sh@101 -- # waitforlisten 83878 00:11:42.089 00:15:55 sw_hotplug -- common/autotest_common.sh@827 -- # '[' -z 83878 ']' 00:11:42.089 00:15:55 sw_hotplug -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.089 00:15:55 sw_hotplug -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:42.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.089 00:15:55 sw_hotplug -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.089 00:15:55 sw_hotplug -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:42.089 00:15:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.089 [2024-07-23 00:15:55.915472] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:11:42.089 [2024-07-23 00:15:55.915588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83878 ] 00:11:42.089 [2024-07-23 00:15:56.065899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.089 [2024-07-23 00:15:56.108368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@860 -- # return 0 00:11:42.089 00:15:56 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:11:42.089 00:15:56 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:00:10.0 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.089 Nvme00n1 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.089 00:15:56 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme00n1 6 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme00n1 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme00n1 -t 6 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.089 00:15:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.347 [ 00:11:42.347 { 00:11:42.347 "name": "Nvme00n1", 00:11:42.347 "aliases": [ 00:11:42.347 "f844e13f-fa20-4f3d-aaa1-02858ab2a5ec" 00:11:42.347 ], 00:11:42.347 "product_name": "NVMe disk", 00:11:42.347 "block_size": 4096, 00:11:42.347 "num_blocks": 1548666, 00:11:42.347 "uuid": "f844e13f-fa20-4f3d-aaa1-02858ab2a5ec", 00:11:42.347 "md_size": 64, 00:11:42.347 "md_interleave": false, 00:11:42.347 "dif_type": 0, 00:11:42.347 "assigned_rate_limits": { 00:11:42.347 "rw_ios_per_sec": 0, 00:11:42.347 "rw_mbytes_per_sec": 0, 00:11:42.347 "r_mbytes_per_sec": 0, 00:11:42.347 "w_mbytes_per_sec": 0 00:11:42.347 }, 00:11:42.347 "claimed": false, 00:11:42.347 "zoned": false, 00:11:42.347 "supported_io_types": { 00:11:42.347 "read": true, 00:11:42.347 "write": true, 00:11:42.347 "unmap": true, 00:11:42.347 "write_zeroes": true, 00:11:42.347 "flush": true, 00:11:42.347 "reset": true, 00:11:42.347 "compare": true, 00:11:42.347 "compare_and_write": false, 00:11:42.347 "abort": true, 00:11:42.347 "nvme_admin": true, 00:11:42.347 "nvme_io": true 00:11:42.347 }, 00:11:42.347 "driver_specific": { 00:11:42.347 "nvme": [ 00:11:42.347 { 00:11:42.347 "pci_address": "0000:00:10.0", 00:11:42.347 "trid": { 00:11:42.347 "trtype": "PCIe", 00:11:42.347 "traddr": "0000:00:10.0" 00:11:42.347 }, 00:11:42.347 "ctrlr_data": { 00:11:42.347 
"cntlid": 0, 00:11:42.347 "vendor_id": "0x1b36", 00:11:42.347 "model_number": "QEMU NVMe Ctrl", 00:11:42.347 "serial_number": "12340", 00:11:42.347 "firmware_revision": "8.0.0", 00:11:42.347 "subnqn": "nqn.2019-08.org.qemu:12340", 00:11:42.347 "oacs": { 00:11:42.347 "security": 0, 00:11:42.347 "format": 1, 00:11:42.347 "firmware": 0, 00:11:42.347 "ns_manage": 1 00:11:42.347 }, 00:11:42.347 "multi_ctrlr": false, 00:11:42.347 "ana_reporting": false 00:11:42.347 }, 00:11:42.347 "vs": { 00:11:42.347 "nvme_version": "1.4" 00:11:42.347 }, 00:11:42.347 "ns_data": { 00:11:42.347 "id": 1, 00:11:42.347 "can_share": false 00:11:42.347 } 00:11:42.347 } 00:11:42.347 ], 00:11:42.347 "mp_policy": "active_passive" 00:11:42.347 } 00:11:42.347 } 00:11:42.347 ] 00:11:42.347 00:15:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.347 00:15:56 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:11:42.347 00:15:56 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:11:42.347 00:15:56 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme01 -t PCIe -a 0000:00:11.0 00:11:42.347 00:15:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.347 00:15:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.347 Nvme01n1 00:11:42.347 00:15:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.347 00:15:56 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme01n1 6 00:11:42.347 00:15:56 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme01n1 00:11:42.347 00:15:56 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:11:42.347 00:15:56 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme01n1 -t 6 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.348 [ 00:11:42.348 { 00:11:42.348 "name": "Nvme01n1", 00:11:42.348 "aliases": [ 00:11:42.348 "65337edb-f981-4ac9-84d7-b77f05a3d0c1" 00:11:42.348 ], 00:11:42.348 "product_name": "NVMe disk", 00:11:42.348 "block_size": 4096, 00:11:42.348 "num_blocks": 1310720, 00:11:42.348 "uuid": "65337edb-f981-4ac9-84d7-b77f05a3d0c1", 00:11:42.348 "assigned_rate_limits": { 00:11:42.348 "rw_ios_per_sec": 0, 00:11:42.348 "rw_mbytes_per_sec": 0, 00:11:42.348 "r_mbytes_per_sec": 0, 00:11:42.348 "w_mbytes_per_sec": 0 00:11:42.348 }, 00:11:42.348 "claimed": false, 00:11:42.348 "zoned": false, 00:11:42.348 "supported_io_types": { 00:11:42.348 "read": true, 00:11:42.348 "write": true, 00:11:42.348 "unmap": true, 00:11:42.348 "write_zeroes": true, 00:11:42.348 "flush": true, 00:11:42.348 "reset": true, 00:11:42.348 "compare": true, 00:11:42.348 "compare_and_write": false, 00:11:42.348 "abort": true, 00:11:42.348 "nvme_admin": true, 00:11:42.348 "nvme_io": true 00:11:42.348 }, 00:11:42.348 "driver_specific": { 00:11:42.348 "nvme": [ 00:11:42.348 { 00:11:42.348 "pci_address": 
"0000:00:11.0", 00:11:42.348 "trid": { 00:11:42.348 "trtype": "PCIe", 00:11:42.348 "traddr": "0000:00:11.0" 00:11:42.348 }, 00:11:42.348 "ctrlr_data": { 00:11:42.348 "cntlid": 0, 00:11:42.348 "vendor_id": "0x1b36", 00:11:42.348 "model_number": "QEMU NVMe Ctrl", 00:11:42.348 "serial_number": "12341", 00:11:42.348 "firmware_revision": "8.0.0", 00:11:42.348 "subnqn": "nqn.2019-08.org.qemu:12341", 00:11:42.348 "oacs": { 00:11:42.348 "security": 0, 00:11:42.348 "format": 1, 00:11:42.348 "firmware": 0, 00:11:42.348 "ns_manage": 1 00:11:42.348 }, 00:11:42.348 "multi_ctrlr": false, 00:11:42.348 "ana_reporting": false 00:11:42.348 }, 00:11:42.348 "vs": { 00:11:42.348 "nvme_version": "1.4" 00:11:42.348 }, 00:11:42.348 "ns_data": { 00:11:42.348 "id": 1, 00:11:42.348 "can_share": false 00:11:42.348 } 00:11:42.348 } 00:11:42.348 ], 00:11:42.348 "mp_policy": "active_passive" 00:11:42.348 } 00:11:42.348 } 00:11:42.348 ] 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:11:42.348 00:15:56 sw_hotplug -- nvme/sw_hotplug.sh@108 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.348 00:15:56 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # debug_remove_attach_helper 3 6 true 00:11:42.348 00:15:56 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:11:42.348 00:15:56 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:11:42.348 00:15:56 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:11:42.348 00:15:56 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:11:42.348 00:15:56 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:11:42.348 00:15:56 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:11:42.348 00:15:56 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:11:42.348 00:15:56 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:11:48.915 00:16:02 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:11:48.915 00:16:02 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:48.915 00:16:02 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:48.915 00:16:02 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:48.915 00:16:02 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:48.915 00:16:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:11:48.915 00:16:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:11:48.915 [2024-07-23 00:16:03.009836] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:48.915 [2024-07-23 00:16:03.012118] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.915 [2024-07-23 00:16:03.012275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.915 [2024-07-23 00:16:03.012384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.915 [2024-07-23 00:16:03.012478] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.915 [2024-07-23 00:16:03.012495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.915 [2024-07-23 00:16:03.012511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.915 [2024-07-23 00:16:03.012525] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.915 [2024-07-23 00:16:03.012542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.915 [2024-07-23 00:16:03.012554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.915 [2024-07-23 00:16:03.012570] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.915 [2024-07-23 00:16:03.012581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.915 [2024-07-23 00:16:03.012596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.915 [2024-07-23 00:16:03.409224] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:48.915 [2024-07-23 00:16:03.411273] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.916 [2024-07-23 00:16:03.411324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.916 [2024-07-23 00:16:03.411360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.916 [2024-07-23 00:16:03.411380] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.916 [2024-07-23 00:16:03.411394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.916 [2024-07-23 00:16:03.411406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.916 [2024-07-23 00:16:03.411420] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.916 [2024-07-23 00:16:03.411433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.916 [2024-07-23 00:16:03.411458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.916 [2024-07-23 00:16:03.411469] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.916 [2024-07-23 00:16:03.411485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.916 [2024-07-23 00:16:03.411496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.507 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:11:55.507 00:16:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.507 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:11:55.507 00:16:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.507 00:16:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.507 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:11:55.507 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:11:55.507 00:16:09 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:11:55.507 00:16:09 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:11:55.507 00:16:09 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:11:55.507 00:16:09 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:11:55.507 00:16:09 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.13 00:11:55.507 00:16:09 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:11:55.507 00:16:09 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:11:55.507 00:16:09 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:11:55.508 00:16:09 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:11:55.508 00:16:09 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.13 00:11:55.508 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.13 00:11:55.508 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme 
drive(s))' 12.13 2 00:11:55.508 remove_attach_helper took 12.13s to complete (handling 2 nvme drive(s)) 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:55.508 00:16:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.508 00:16:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.508 00:16:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.508 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:55.508 00:16:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.508 00:16:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.508 00:16:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.508 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # debug_remove_attach_helper 3 6 true 00:11:55.508 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:11:55.508 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:11:55.508 00:16:09 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:11:55.508 00:16:09 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:11:55.508 00:16:09 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:11:55.508 00:16:09 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:11:55.508 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:11:55.508 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:11:55.508 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:11:55.508 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:11:55.508 00:16:09 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:12:00.780 00:16:15 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:00.780 00:16:15 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:00.780 00:16:15 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:00.780 00:16:15 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # trap - ERR 00:12:00.780 00:16:15 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # print_backtrace 00:12:00.780 00:16:15 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:00.780 00:16:15 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:00.780 00:16:15 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:00.780 00:16:15 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:00.780 00:16:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:12:00.780 00:16:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:12:07.400 00:16:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:12:07.400 00:16:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.400 00:16:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:12:07.400 00:16:21 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.08 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:12:07.400 00:16:21 sw_hotplug -- 
common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.08 00:12:07.400 00:16:21 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.08 00:12:07.400 00:16:21 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 12.08 2 00:12:07.400 remove_attach_helper took 12.08s to complete (handling 2 nvme drive(s)) 00:16:21 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:12:07.400 00:16:21 sw_hotplug -- nvme/sw_hotplug.sh@118 -- # killprocess 83878 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@946 -- # '[' -z 83878 ']' 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@950 -- # kill -0 83878 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@951 -- # uname 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83878 00:12:07.400 killing process with pid 83878 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83878' 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@965 -- # kill 83878 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@970 -- # wait 83878 00:12:07.400 ************************************ 00:12:07.400 END TEST sw_hotplug 00:12:07.400 ************************************ 00:12:07.400 00:12:07.400 real 1m16.166s 00:12:07.400 user 0m43.927s 00:12:07.400 sys 0m15.415s 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:07.400 00:16:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.400 00:16:21 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:12:07.400 00:16:21 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:07.400 00:16:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:07.400 00:16:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:07.400 00:16:21 -- common/autotest_common.sh@10 -- # set +x 00:12:07.400 ************************************ 00:12:07.400 START TEST nvme_xnvme 00:12:07.400 ************************************ 00:12:07.400 00:16:21 nvme_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:07.400 * Looking for test storage... 
00:12:07.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:07.400 00:16:21 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:07.400 00:16:21 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.400 00:16:21 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.400 00:16:21 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.401 00:16:21 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.401 00:16:21 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.401 00:16:21 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.401 00:16:21 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:07.401 00:16:21 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.401 00:16:21 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:07.401 00:16:21 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:07.401 00:16:21 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:07.401 00:16:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:07.401 ************************************ 00:12:07.401 START TEST xnvme_to_malloc_dd_copy 00:12:07.401 ************************************ 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1121 -- # malloc_to_xnvme_copy 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:07.401 00:16:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:07.401 { 00:12:07.401 "subsystems": [ 00:12:07.401 { 00:12:07.401 "subsystem": "bdev", 00:12:07.401 "config": [ 00:12:07.401 { 00:12:07.401 "params": { 00:12:07.401 "block_size": 512, 00:12:07.401 "num_blocks": 2097152, 00:12:07.401 "name": "malloc0" 00:12:07.401 }, 00:12:07.401 "method": "bdev_malloc_create" 00:12:07.401 }, 00:12:07.401 { 00:12:07.401 "params": { 00:12:07.401 "io_mechanism": "libaio", 00:12:07.401 "filename": "/dev/nullb0", 00:12:07.401 "name": "null0" 00:12:07.401 }, 00:12:07.401 "method": "bdev_xnvme_create" 00:12:07.401 }, 00:12:07.401 { 00:12:07.401 "method": "bdev_wait_for_examine" 00:12:07.401 } 00:12:07.401 ] 00:12:07.401 } 00:12:07.401 ] 00:12:07.401 } 00:12:07.401 [2024-07-23 00:16:21.948488] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:12:07.401 [2024-07-23 00:16:21.948744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84233 ] 00:12:07.661 [2024-07-23 00:16:22.099675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.661 [2024-07-23 00:16:22.143656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.670  Copying: 259/1024 [MB] (259 MBps) Copying: 506/1024 [MB] (246 MBps) Copying: 753/1024 [MB] (246 MBps) Copying: 999/1024 [MB] (246 MBps) Copying: 1024/1024 [MB] (average 250 MBps) 00:12:12.670 00:12:12.670 00:16:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:12.670 00:16:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:12.670 00:16:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:12.670 00:16:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:12.670 { 00:12:12.670 "subsystems": [ 00:12:12.670 { 00:12:12.670 "subsystem": "bdev", 00:12:12.670 "config": [ 00:12:12.670 { 00:12:12.670 "params": { 00:12:12.670 "block_size": 512, 00:12:12.670 "num_blocks": 2097152, 00:12:12.670 "name": "malloc0" 00:12:12.670 }, 00:12:12.670 "method": "bdev_malloc_create" 00:12:12.670 }, 00:12:12.670 { 00:12:12.670 "params": { 00:12:12.670 "io_mechanism": "libaio", 00:12:12.670 "filename": "/dev/nullb0", 00:12:12.670 "name": "null0" 00:12:12.670 }, 00:12:12.670 "method": "bdev_xnvme_create" 00:12:12.670 }, 00:12:12.670 { 00:12:12.670 "method": "bdev_wait_for_examine" 00:12:12.670 } 00:12:12.670 ] 00:12:12.670 } 00:12:12.670 ] 00:12:12.670 } 00:12:12.670 [2024-07-23 00:16:27.189351] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:12:12.670 [2024-07-23 00:16:27.189496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84298 ] 00:12:12.670 [2024-07-23 00:16:27.340239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.929 [2024-07-23 00:16:27.382745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.685  Copying: 265/1024 [MB] (265 MBps) Copying: 530/1024 [MB] (265 MBps) Copying: 793/1024 [MB] (262 MBps) Copying: 1024/1024 [MB] (average 264 MBps) 00:12:17.685 00:12:17.685 00:16:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:17.685 00:16:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:17.685 00:16:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:17.685 00:16:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:17.685 00:16:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:17.685 00:16:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:17.685 { 00:12:17.685 "subsystems": [ 00:12:17.685 { 00:12:17.685 "subsystem": "bdev", 00:12:17.685 "config": [ 00:12:17.685 { 00:12:17.685 "params": { 00:12:17.685 "block_size": 512, 00:12:17.685 "num_blocks": 2097152, 00:12:17.685 "name": "malloc0" 00:12:17.685 }, 00:12:17.685 "method": "bdev_malloc_create" 00:12:17.685 }, 00:12:17.685 { 00:12:17.685 "params": { 00:12:17.685 "io_mechanism": "io_uring", 00:12:17.685 "filename": "/dev/nullb0", 00:12:17.685 "name": "null0" 00:12:17.685 }, 00:12:17.685 "method": "bdev_xnvme_create" 00:12:17.685 }, 00:12:17.685 { 00:12:17.685 "method": "bdev_wait_for_examine" 00:12:17.685 } 00:12:17.685 ] 00:12:17.685 } 00:12:17.685 ] 00:12:17.685 } 00:12:17.685 [2024-07-23 00:16:32.219293] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
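The for io in "${xnvme_io[@]}" rerun above changes nothing but the io_mechanism field of bdev_xnvme_create. The shape of that outer loop, with do_copy standing in (hypothetically) for the two spdk_dd passes shown in this log:

    for io in libaio io_uring; do
        # Regenerate the JSON with "io_mechanism": "$io", then copy
        # malloc0 -> null0 and read back null0 -> malloc0.
        do_copy "$io"
    done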
00:12:17.685 [2024-07-23 00:16:32.219458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84359 ] 00:12:17.944 [2024-07-23 00:16:32.370176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.944 [2024-07-23 00:16:32.412553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.393  Copying: 270/1024 [MB] (270 MBps) Copying: 543/1024 [MB] (273 MBps) Copying: 818/1024 [MB] (274 MBps) Copying: 1024/1024 [MB] (average 273 MBps) 00:12:22.393 00:12:22.393 00:16:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:22.393 00:16:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:22.393 00:16:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:22.393 00:16:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:22.393 { 00:12:22.393 "subsystems": [ 00:12:22.393 { 00:12:22.393 "subsystem": "bdev", 00:12:22.393 "config": [ 00:12:22.393 { 00:12:22.393 "params": { 00:12:22.393 "block_size": 512, 00:12:22.393 "num_blocks": 2097152, 00:12:22.393 "name": "malloc0" 00:12:22.393 }, 00:12:22.393 "method": "bdev_malloc_create" 00:12:22.393 }, 00:12:22.393 { 00:12:22.393 "params": { 00:12:22.393 "io_mechanism": "io_uring", 00:12:22.393 "filename": "/dev/nullb0", 00:12:22.393 "name": "null0" 00:12:22.393 }, 00:12:22.393 "method": "bdev_xnvme_create" 00:12:22.393 }, 00:12:22.393 { 00:12:22.393 "method": "bdev_wait_for_examine" 00:12:22.393 } 00:12:22.393 ] 00:12:22.393 } 00:12:22.393 ] 00:12:22.393 } 00:12:22.652 [2024-07-23 00:16:37.090611] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:12:22.652 [2024-07-23 00:16:37.090725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84422 ] 00:12:22.652 [2024-07-23 00:16:37.239998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.652 [2024-07-23 00:16:37.281975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.100  Copying: 280/1024 [MB] (280 MBps) Copying: 559/1024 [MB] (279 MBps) Copying: 840/1024 [MB] (280 MBps) Copying: 1024/1024 [MB] (average 280 MBps) 00:12:27.100 00:12:27.100 00:16:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:27.100 00:16:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:27.359 00:12:27.359 real 0m19.995s 00:12:27.359 user 0m15.665s 00:12:27.359 sys 0m3.892s 00:12:27.359 00:16:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:27.359 00:16:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:27.359 ************************************ 00:12:27.359 END TEST xnvme_to_malloc_dd_copy 00:12:27.359 ************************************ 00:12:27.359 00:16:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:27.359 00:16:41 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:27.359 00:16:41 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:27.359 00:16:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:27.359 ************************************ 00:12:27.359 START TEST xnvme_bdevperf 00:12:27.359 ************************************ 00:12:27.359 00:16:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1121 -- # xnvme_bdevperf 00:12:27.359 00:16:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:27.360 00:16:41 
nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:27.360 00:16:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:27.360 { 00:12:27.360 "subsystems": [ 00:12:27.360 { 00:12:27.360 "subsystem": "bdev", 00:12:27.360 "config": [ 00:12:27.360 { 00:12:27.360 "params": { 00:12:27.360 "io_mechanism": "libaio", 00:12:27.360 "filename": "/dev/nullb0", 00:12:27.360 "name": "null0" 00:12:27.360 }, 00:12:27.360 "method": "bdev_xnvme_create" 00:12:27.360 }, 00:12:27.360 { 00:12:27.360 "method": "bdev_wait_for_examine" 00:12:27.360 } 00:12:27.360 ] 00:12:27.360 } 00:12:27.360 ] 00:12:27.360 } 00:12:27.360 [2024-07-23 00:16:42.000804] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:27.360 [2024-07-23 00:16:42.000929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84500 ] 00:12:27.619 [2024-07-23 00:16:42.151568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.619 [2024-07-23 00:16:42.193342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.619 Running I/O for 5 seconds... 00:12:32.891 00:12:32.891 Latency(us) 00:12:32.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.891 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:32.891 null0 : 5.00 160340.22 626.33 0.00 0.00 396.69 125.84 1269.92 00:12:32.891 =================================================================================================================== 00:12:32.891 Total : 160340.22 626.33 0.00 0.00 396.69 125.84 1269.92 00:12:32.891 00:16:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:32.891 00:16:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:32.891 00:16:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:32.891 00:16:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:32.891 00:16:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:32.891 00:16:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:33.150 { 00:12:33.150 "subsystems": [ 00:12:33.150 { 00:12:33.150 "subsystem": "bdev", 00:12:33.150 "config": [ 00:12:33.150 { 00:12:33.150 "params": { 00:12:33.150 "io_mechanism": "io_uring", 00:12:33.150 "filename": "/dev/nullb0", 00:12:33.150 "name": "null0" 00:12:33.150 }, 00:12:33.150 "method": "bdev_xnvme_create" 00:12:33.150 }, 00:12:33.150 { 00:12:33.150 "method": "bdev_wait_for_examine" 00:12:33.150 } 00:12:33.150 ] 00:12:33.150 } 00:12:33.150 ] 00:12:33.150 } 00:12:33.150 [2024-07-23 00:16:47.618686] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
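A hedged reading of the bdevperf flags above: -q 64 keeps 64 I/Os in flight, -w randread sets the workload, -t 5 runs for five seconds, -o 4096 issues 4 KiB I/Os, and -T null0 points the job at the xnvme bdev from the config. A standalone sketch, with a hypothetical file standing in for the gen_conf output:

    SPDK=/home/vagrant/spdk_repo/spdk
    # /tmp/null0.json is a stand-in for the JSON gen_conf emits above.
    "$SPDK/build/examples/bdevperf" --json /tmp/null0.json \
        -q 64 -w randread -t 5 -T null0 -o 4096

Under this load the libaio pass above settles around 160K IOPS; the io_uring pass that follows reaches roughly 205K.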
00:12:33.150 [2024-07-23 00:16:47.618829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84569 ] 00:12:33.150 [2024-07-23 00:16:47.769285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.150 [2024-07-23 00:16:47.812663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.408 Running I/O for 5 seconds... 00:12:38.680 00:12:38.680 Latency(us) 00:12:38.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.680 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:38.680 null0 : 5.00 204534.06 798.96 0.00 0.00 310.55 203.98 1085.69 00:12:38.680 =================================================================================================================== 00:12:38.680 Total : 204534.06 798.96 0.00 0.00 310.55 203.98 1085.69 00:12:38.680 00:16:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:38.680 00:16:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:38.680 00:12:38.680 real 0m11.267s 00:12:38.680 user 0m8.080s 00:12:38.680 sys 0m2.986s 00:12:38.680 00:16:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:38.680 00:16:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:38.680 ************************************ 00:12:38.680 END TEST xnvme_bdevperf 00:12:38.680 ************************************ 00:12:38.680 ************************************ 00:12:38.680 END TEST nvme_xnvme 00:12:38.680 ************************************ 00:12:38.680 00:12:38.680 real 0m31.533s 00:12:38.680 user 0m23.848s 00:12:38.680 sys 0m7.056s 00:12:38.680 00:16:53 nvme_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:38.680 00:16:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:38.680 00:16:53 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:38.681 00:16:53 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:38.681 00:16:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:38.681 00:16:53 -- common/autotest_common.sh@10 -- # set +x 00:12:38.681 ************************************ 00:12:38.681 START TEST blockdev_xnvme 00:12:38.681 ************************************ 00:12:38.681 00:16:53 blockdev_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:38.940 * Looking for test storage... 
00:12:38.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=84692 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:38.940 00:16:53 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 84692 00:12:38.940 00:16:53 blockdev_xnvme -- common/autotest_common.sh@827 -- # '[' -z 84692 ']' 00:12:38.940 00:16:53 blockdev_xnvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.940 00:16:53 blockdev_xnvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:38.940 00:16:53 blockdev_xnvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.940 00:16:53 blockdev_xnvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:38.940 00:16:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:38.940 [2024-07-23 00:16:53.530886] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
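blockdev_xnvme is the generic bdev suite run with test_type=xnvme: it boots spdk_tgt, builds one xnvme bdev per /dev/nvme*n* namespace over io_uring, then replays the stock hello_world, bounds, and nbd sub-tests against them. The manual entry point, assuming the repo path from this log (the suite normally runs as root):

    /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme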
00:12:38.940 [2024-07-23 00:16:53.531685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84692 ] 00:12:39.199 [2024-07-23 00:16:53.681667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.199 [2024-07-23 00:16:53.724859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.767 00:16:54 blockdev_xnvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:39.767 00:16:54 blockdev_xnvme -- common/autotest_common.sh@860 -- # return 0 00:12:39.767 00:16:54 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:12:39.767 00:16:54 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:12:39.767 00:16:54 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:39.767 00:16:54 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:39.767 00:16:54 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:40.026 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:40.286 Waiting for block devices as requested 00:12:40.286 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:40.545 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.857 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:45.857 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1666 -- # local nvme bdf 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:45.857 
00:17:00 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1c1n1 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1c1n1 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:45.857 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:45.857 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:45.857 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:45.857 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:45.857 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:45.857 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:45.857 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:12:45.857 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:45.857 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:45.857 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:45.857 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:12:45.857 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:45.857 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 
00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:45.858 nvme0n1 00:12:45.858 nvme0n2 00:12:45.858 nvme0n3 00:12:45.858 nvme1n1 00:12:45.858 nvme2n1 00:12:45.858 nvme3n1 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 
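The six bdev_xnvme_create lines printed above are RPCs replayed into spdk_tgt through rpc_cmd; the same bdev can be created by hand with scripts/rpc.py against the default socket. A sketch for the first namespace, arguments in the order the log shows (filename, bdev name, io mechanism):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring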
00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "43dfa4b9-cb63-44b7-992c-3e9a52fad3ab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "43dfa4b9-cb63-44b7-992c-3e9a52fad3ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "ee0b8adf-5aaf-4c66-ac07-7ec78073196b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ee0b8adf-5aaf-4c66-ac07-7ec78073196b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "637500fc-927e-46c3-8cb1-3bce0ca54f53"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "637500fc-927e-46c3-8cb1-3bce0ca54f53",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "92e7fc2f-7fb4-445a-9ec3-7576456ec4f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "92e7fc2f-7fb4-445a-9ec3-7576456ec4f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "6d31fd3c-c164-46d8-a7e3-ccfb32429015"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6d31fd3c-c164-46d8-a7e3-ccfb32429015",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": 
false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "665652b4-39e3-4911-a6e0-29034bc2eb44"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "665652b4-39e3-4911-a6e0-29034bc2eb44",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:12:45.858 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 84692 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@946 -- # '[' -z 84692 ']' 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@950 -- # kill -0 84692 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@951 -- # uname 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84692 00:12:45.858 killing process with pid 84692 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84692' 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@965 -- # kill 84692 00:12:45.858 00:17:00 blockdev_xnvme -- common/autotest_common.sh@970 -- # wait 84692 00:12:46.426 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:46.426 00:17:00 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:46.426 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:46.426 00:17:00 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:46.426 00:17:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:46.426 ************************************ 00:12:46.426 START TEST bdev_hello_world 00:12:46.426 ************************************ 00:12:46.426 00:17:00 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:46.426 [2024-07-23 00:17:00.984994] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:12:46.426 [2024-07-23 00:17:00.985166] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84962 ] 00:12:46.686 [2024-07-23 00:17:01.137141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.686 [2024-07-23 00:17:01.183131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.686 [2024-07-23 00:17:01.360904] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:46.686 [2024-07-23 00:17:01.360961] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:46.686 [2024-07-23 00:17:01.360992] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:46.686 [2024-07-23 00:17:01.363249] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:46.686 [2024-07-23 00:17:01.363686] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:46.686 [2024-07-23 00:17:01.363708] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:46.686 [2024-07-23 00:17:01.363980] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:46.686 00:12:46.686 [2024-07-23 00:17:01.364004] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:46.945 00:12:46.945 real 0m0.684s 00:12:46.945 user 0m0.376s 00:12:46.945 sys 0m0.198s 00:12:46.945 ************************************ 00:12:46.945 END TEST bdev_hello_world 00:12:46.945 ************************************ 00:12:46.945 00:17:01 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:46.945 00:17:01 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:47.204 00:17:01 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:12:47.204 00:17:01 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:47.204 00:17:01 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:47.204 00:17:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.204 ************************************ 00:12:47.204 START TEST bdev_bounds 00:12:47.204 ************************************ 00:12:47.204 00:17:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:12:47.204 Process bdevio pid: 84987 00:12:47.204 00:17:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=84987 00:12:47.204 00:17:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:47.204 00:17:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:47.204 00:17:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 84987' 00:12:47.204 00:17:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 84987 00:12:47.204 00:17:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 84987 ']' 00:12:47.204 00:17:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.204 00:17:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:47.204 00:17:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.204 00:17:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:47.204 00:17:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:47.204 [2024-07-23 00:17:01.740318] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:47.204 [2024-07-23 00:17:01.740442] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84987 ] 00:12:47.463 [2024-07-23 00:17:01.891957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:47.463 [2024-07-23 00:17:01.941389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.463 [2024-07-23 00:17:01.941471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.463 [2024-07-23 00:17:01.941595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.029 00:17:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:48.029 00:17:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:12:48.029 00:17:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:48.029 I/O targets: 00:12:48.029 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:48.029 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:48.029 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:48.029 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:48.029 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:48.029 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:48.029 00:12:48.029 00:12:48.029 CUnit - A unit testing framework for C - Version 2.1-3 00:12:48.029 http://cunit.sourceforge.net/ 00:12:48.029 00:12:48.029 00:12:48.029 Suite: bdevio tests on: nvme3n1 00:12:48.029 Test: blockdev write read block ...passed 00:12:48.029 Test: blockdev write zeroes read block ...passed 00:12:48.029 Test: blockdev write zeroes read no split ...passed 00:12:48.029 Test: blockdev write zeroes read split ...passed 00:12:48.029 Test: blockdev write zeroes read split partial ...passed 00:12:48.029 Test: blockdev reset ...passed 00:12:48.029 Test: blockdev write read 8 blocks ...passed 00:12:48.029 Test: blockdev write read size > 128k ...passed 00:12:48.029 Test: blockdev write read invalid size ...passed 00:12:48.029 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.029 Test: blockdev write read max offset ...passed 00:12:48.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.029 Test: blockdev writev readv 8 blocks ...passed 00:12:48.029 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.029 Test: blockdev writev readv block ...passed 00:12:48.029 Test: blockdev writev readv size > 128k ...passed 00:12:48.029 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.029 Test: blockdev comparev and writev ...passed 00:12:48.029 Test: blockdev nvme passthru rw ...passed 00:12:48.029 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.029 Test: blockdev nvme admin passthru 
...passed 00:12:48.029 Test: blockdev copy ...passed 00:12:48.029 Suite: bdevio tests on: nvme2n1 00:12:48.029 Test: blockdev write read block ...passed 00:12:48.029 Test: blockdev write zeroes read block ...passed 00:12:48.029 Test: blockdev write zeroes read no split ...passed 00:12:48.029 Test: blockdev write zeroes read split ...passed 00:12:48.029 Test: blockdev write zeroes read split partial ...passed 00:12:48.029 Test: blockdev reset ...passed 00:12:48.029 Test: blockdev write read 8 blocks ...passed 00:12:48.029 Test: blockdev write read size > 128k ...passed 00:12:48.029 Test: blockdev write read invalid size ...passed 00:12:48.029 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.029 Test: blockdev write read max offset ...passed 00:12:48.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.029 Test: blockdev writev readv 8 blocks ...passed 00:12:48.029 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.029 Test: blockdev writev readv block ...passed 00:12:48.029 Test: blockdev writev readv size > 128k ...passed 00:12:48.029 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.029 Test: blockdev comparev and writev ...passed 00:12:48.029 Test: blockdev nvme passthru rw ...passed 00:12:48.029 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.029 Test: blockdev nvme admin passthru ...passed 00:12:48.029 Test: blockdev copy ...passed 00:12:48.029 Suite: bdevio tests on: nvme1n1 00:12:48.029 Test: blockdev write read block ...passed 00:12:48.029 Test: blockdev write zeroes read block ...passed 00:12:48.029 Test: blockdev write zeroes read no split ...passed 00:12:48.029 Test: blockdev write zeroes read split ...passed 00:12:48.029 Test: blockdev write zeroes read split partial ...passed 00:12:48.029 Test: blockdev reset ...passed 00:12:48.029 Test: blockdev write read 8 blocks ...passed 00:12:48.029 Test: blockdev write read size > 128k ...passed 00:12:48.029 Test: blockdev write read invalid size ...passed 00:12:48.029 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.029 Test: blockdev write read max offset ...passed 00:12:48.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.029 Test: blockdev writev readv 8 blocks ...passed 00:12:48.029 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.029 Test: blockdev writev readv block ...passed 00:12:48.029 Test: blockdev writev readv size > 128k ...passed 00:12:48.029 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.030 Test: blockdev comparev and writev ...passed 00:12:48.030 Test: blockdev nvme passthru rw ...passed 00:12:48.030 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.030 Test: blockdev nvme admin passthru ...passed 00:12:48.030 Test: blockdev copy ...passed 00:12:48.030 Suite: bdevio tests on: nvme0n3 00:12:48.030 Test: blockdev write read block ...passed 00:12:48.030 Test: blockdev write zeroes read block ...passed 00:12:48.030 Test: blockdev write zeroes read no split ...passed 00:12:48.030 Test: blockdev write zeroes read split ...passed 00:12:48.030 Test: blockdev write zeroes read split partial ...passed 00:12:48.030 Test: blockdev reset ...passed 00:12:48.030 Test: blockdev write read 8 blocks ...passed 00:12:48.030 Test: blockdev write read 
size > 128k ...passed 00:12:48.030 Test: blockdev write read invalid size ...passed 00:12:48.030 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.030 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.030 Test: blockdev write read max offset ...passed 00:12:48.030 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.030 Test: blockdev writev readv 8 blocks ...passed 00:12:48.030 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.030 Test: blockdev writev readv block ...passed 00:12:48.030 Test: blockdev writev readv size > 128k ...passed 00:12:48.030 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.030 Test: blockdev comparev and writev ...passed 00:12:48.030 Test: blockdev nvme passthru rw ...passed 00:12:48.030 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.030 Test: blockdev nvme admin passthru ...passed 00:12:48.030 Test: blockdev copy ...passed 00:12:48.030 Suite: bdevio tests on: nvme0n2 00:12:48.030 Test: blockdev write read block ...passed 00:12:48.030 Test: blockdev write zeroes read block ...passed 00:12:48.288 Test: blockdev write zeroes read no split ...passed 00:12:48.288 Test: blockdev write zeroes read split ...passed 00:12:48.288 Test: blockdev write zeroes read split partial ...passed 00:12:48.288 Test: blockdev reset ...passed 00:12:48.288 Test: blockdev write read 8 blocks ...passed 00:12:48.288 Test: blockdev write read size > 128k ...passed 00:12:48.288 Test: blockdev write read invalid size ...passed 00:12:48.288 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.288 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.288 Test: blockdev write read max offset ...passed 00:12:48.288 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.288 Test: blockdev writev readv 8 blocks ...passed 00:12:48.288 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.288 Test: blockdev writev readv block ...passed 00:12:48.288 Test: blockdev writev readv size > 128k ...passed 00:12:48.288 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.288 Test: blockdev comparev and writev ...passed 00:12:48.288 Test: blockdev nvme passthru rw ...passed 00:12:48.288 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.288 Test: blockdev nvme admin passthru ...passed 00:12:48.288 Test: blockdev copy ...passed 00:12:48.288 Suite: bdevio tests on: nvme0n1 00:12:48.288 Test: blockdev write read block ...passed 00:12:48.288 Test: blockdev write zeroes read block ...passed 00:12:48.288 Test: blockdev write zeroes read no split ...passed 00:12:48.288 Test: blockdev write zeroes read split ...passed 00:12:48.288 Test: blockdev write zeroes read split partial ...passed 00:12:48.288 Test: blockdev reset ...passed 00:12:48.288 Test: blockdev write read 8 blocks ...passed 00:12:48.288 Test: blockdev write read size > 128k ...passed 00:12:48.288 Test: blockdev write read invalid size ...passed 00:12:48.288 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.288 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.288 Test: blockdev write read max offset ...passed 00:12:48.288 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.288 Test: blockdev writev readv 8 blocks ...passed 00:12:48.288 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.288 Test: blockdev 
writev readv block ...passed 00:12:48.288 Test: blockdev writev readv size > 128k ...passed 00:12:48.288 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.288 Test: blockdev comparev and writev ...passed 00:12:48.288 Test: blockdev nvme passthru rw ...passed 00:12:48.288 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.288 Test: blockdev nvme admin passthru ...passed 00:12:48.288 Test: blockdev copy ...passed 00:12:48.288 00:12:48.288 Run Summary: Type Total Ran Passed Failed Inactive 00:12:48.288 suites 6 6 n/a 0 0 00:12:48.288 tests 138 138 138 0 0 00:12:48.288 asserts 780 780 780 0 n/a 00:12:48.288 00:12:48.288 Elapsed time = 0.373 seconds 00:12:48.288 0 00:12:48.288 00:17:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 84987 00:12:48.288 00:17:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 84987 ']' 00:12:48.288 00:17:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 84987 00:12:48.288 00:17:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:12:48.288 00:17:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:48.288 00:17:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84987 00:12:48.288 00:17:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:48.288 killing process with pid 84987 00:12:48.288 00:17:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:48.288 00:17:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84987' 00:12:48.288 00:17:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 84987 00:12:48.288 00:17:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 84987 00:12:48.547 00:17:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:12:48.547 00:12:48.547 real 0m1.394s 00:12:48.547 user 0m3.252s 00:12:48.547 sys 0m0.371s 00:12:48.547 00:17:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:48.547 00:17:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:48.547 ************************************ 00:12:48.547 END TEST bdev_bounds 00:12:48.547 ************************************ 00:12:48.547 00:17:03 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:12:48.547 00:17:03 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:48.547 00:17:03 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:48.547 00:17:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.547 ************************************ 00:12:48.547 START TEST bdev_nbd 00:12:48.547 ************************************ 00:12:48.547 00:17:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:12:48.547 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:12:48.547 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:12:48.547 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:48.547 
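The bdevio Run Summary above (6 suites, 138 tests, 780 asserts) comes from a two-process arrangement: bdevio starts with -w so it waits for an RPC before running, and tests.py perform_tests fires that RPC. A rough sketch of the pairing, assuming the paths from this log; the harness sequences the startup more carefully:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests
    wait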
00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:48.547 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:12:48.547 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:12:48.547 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=85039 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 85039 /var/tmp/spdk-nbd.sock 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 85039 ']' 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:48.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:48.548 00:17:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:48.548 [2024-07-23 00:17:03.213435] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
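bdev_nbd exports each bdev as a kernel /dev/nbdX node through the bdev_svc app listening on /var/tmp/spdk-nbd.sock, then round-trips single 4 KiB blocks with dd (the 1+0 records in/out lines below). The per-device step is one RPC; the nbd device argument is optional, and the harness lets SPDK pick one, as the nbd_device=/dev/nbd0 capture below shows:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk nvme0n1 /dev/nbd0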
00:12:48.548 [2024-07-23 00:17:03.213559] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.814 [2024-07-23 00:17:03.364309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.814 [2024-07-23 00:17:03.406709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.387 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:49.387 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:12:49.387 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:12:49.387 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:49.387 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:12:49.387 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:49.387 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:12:49.387 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:49.387 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:12:49.387 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:49.387 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:49.387 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:49.387 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:49.387 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:49.387 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.646 1+0 records in 
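The dd accounting lines around this point come from the waitfornbd probe visible in the trace: the harness waits for the kernel to publish the device in /proc/partitions, then proves it can serve one direct 4 KiB read. A minimal sketch of that pattern (the retry bound and scratch-file path below are illustrative assumptions, not the helper's exact values):

nbd=nbd0
for i in $(seq 1 20); do
    grep -q -w "$nbd" /proc/partitions && break
    sleep 0.1
done
# One O_DIRECT read; a zero-byte result would mean the device never came up.
dd if="/dev/$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
test "$(stat -c %s /tmp/nbdtest)" != 0
rm -f /tmp/nbdtest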
00:12:49.646 1+0 records out 00:12:49.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676945 s, 6.1 MB/s 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:49.646 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.905 1+0 records in 00:12:49.905 1+0 records out 00:12:49.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697666 s, 5.9 MB/s 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:49.905 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.164 1+0 records in 00:12:50.164 1+0 records out 00:12:50.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00225881 s, 1.8 MB/s 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:50.164 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.423 1+0 records in 00:12:50.423 1+0 records out 00:12:50.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000665304 s, 6.2 MB/s 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:50.423 00:17:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:50.681 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:50.681 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:50.681 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:50.681 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:12:50.681 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:50.681 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:50.681 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:50.681 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:12:50.682 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:50.682 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:50.682 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:50.682 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.682 1+0 records in 00:12:50.682 1+0 records out 00:12:50.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693457 s, 5.9 MB/s 00:12:50.682 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.682 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:50.682 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.682 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:50.682 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:50.682 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.682 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:50.682 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@865 -- # local i 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.940 1+0 records in 00:12:50.940 1+0 records out 00:12:50.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000854588 s, 4.8 MB/s 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:50.940 { 00:12:50.940 "nbd_device": "/dev/nbd0", 00:12:50.940 "bdev_name": "nvme0n1" 00:12:50.940 }, 00:12:50.940 { 00:12:50.940 "nbd_device": "/dev/nbd1", 00:12:50.940 "bdev_name": "nvme0n2" 00:12:50.940 }, 00:12:50.940 { 00:12:50.940 "nbd_device": "/dev/nbd2", 00:12:50.940 "bdev_name": "nvme0n3" 00:12:50.940 }, 00:12:50.940 { 00:12:50.940 "nbd_device": "/dev/nbd3", 00:12:50.940 "bdev_name": "nvme1n1" 00:12:50.940 }, 00:12:50.940 { 00:12:50.940 "nbd_device": "/dev/nbd4", 00:12:50.940 "bdev_name": "nvme2n1" 00:12:50.940 }, 00:12:50.940 { 00:12:50.940 "nbd_device": "/dev/nbd5", 00:12:50.940 "bdev_name": "nvme3n1" 00:12:50.940 } 00:12:50.940 ]' 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:50.940 { 00:12:50.940 "nbd_device": "/dev/nbd0", 00:12:50.940 "bdev_name": "nvme0n1" 00:12:50.940 }, 00:12:50.940 { 00:12:50.940 "nbd_device": "/dev/nbd1", 00:12:50.940 "bdev_name": "nvme0n2" 00:12:50.940 }, 00:12:50.940 { 00:12:50.940 "nbd_device": "/dev/nbd2", 00:12:50.940 "bdev_name": "nvme0n3" 00:12:50.940 }, 00:12:50.940 { 00:12:50.940 "nbd_device": "/dev/nbd3", 00:12:50.940 "bdev_name": "nvme1n1" 00:12:50.940 }, 00:12:50.940 { 00:12:50.940 "nbd_device": "/dev/nbd4", 00:12:50.940 "bdev_name": "nvme2n1" 00:12:50.940 }, 00:12:50.940 { 00:12:50.940 "nbd_device": "/dev/nbd5", 00:12:50.940 "bdev_name": "nvme3n1" 00:12:50.940 } 00:12:50.940 ]' 00:12:50.940 00:17:05 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.199 00:17:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:51.457 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:51.457 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:51.457 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:51.457 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.457 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.457 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:51.457 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:51.457 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.457 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.457 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:51.715 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:51.715 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:51.715 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:51.715 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.715 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.715 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 
/proc/partitions 00:12:51.715 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:51.715 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.715 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.715 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.972 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:52.229 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:52.229 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:52.229 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:52.229 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.229 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.229 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:52.229 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:52.229 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.229 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:52.229 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:52.229 00:17:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:52.487 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:52.745 /dev/nbd0 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.745 1+0 records in 00:12:52.745 1+0 records out 00:12:52.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560379 s, 7.3 MB/s 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:52.745 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:12:53.004 /dev/nbd1 00:12:53.004 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:53.004 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:53.004 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:53.004 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:53.004 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:53.004 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:53.004 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:53.004 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:53.004 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:53.005 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:53.005 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.005 1+0 records in 00:12:53.005 1+0 records out 00:12:53.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681675 s, 6.0 MB/s 00:12:53.005 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.005 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:53.005 00:17:07 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.005 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:53.005 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:53.005 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.005 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:53.005 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:12:53.263 /dev/nbd10 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.263 1+0 records in 00:12:53.263 1+0 records out 00:12:53.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557901 s, 7.3 MB/s 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:53.263 00:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:12:53.522 /dev/nbd11 00:12:53.522 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:53.522 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:53.522 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:12:53.522 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:53.522 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:53.522 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:53.522 00:17:08 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:12:53.522 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:53.522 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:53.522 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:53.522 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.522 1+0 records in 00:12:53.522 1+0 records out 00:12:53.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670875 s, 6.1 MB/s 00:12:53.522 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.522 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:53.523 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.523 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:53.523 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:53.523 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.523 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:53.523 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:12:53.780 /dev/nbd12 00:12:53.780 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:53.780 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:53.780 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:12:53.780 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:53.780 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:53.780 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:53.780 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:12:53.780 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:53.780 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:53.781 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:53.781 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.781 1+0 records in 00:12:53.781 1+0 records out 00:12:53.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761396 s, 5.4 MB/s 00:12:53.781 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.781 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:53.781 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.781 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:53.781 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:53.781 00:17:08 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.781 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:53.781 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:53.781 /dev/nbd13 00:12:53.781 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.039 1+0 records in 00:12:54.039 1+0 records out 00:12:54.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652517 s, 6.3 MB/s 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:54.039 { 00:12:54.039 "nbd_device": "/dev/nbd0", 00:12:54.039 "bdev_name": "nvme0n1" 00:12:54.039 }, 00:12:54.039 { 00:12:54.039 "nbd_device": "/dev/nbd1", 00:12:54.039 "bdev_name": "nvme0n2" 00:12:54.039 }, 00:12:54.039 { 00:12:54.039 "nbd_device": "/dev/nbd10", 00:12:54.039 "bdev_name": "nvme0n3" 00:12:54.039 }, 00:12:54.039 { 00:12:54.039 "nbd_device": "/dev/nbd11", 00:12:54.039 "bdev_name": "nvme1n1" 00:12:54.039 }, 00:12:54.039 { 00:12:54.039 "nbd_device": "/dev/nbd12", 00:12:54.039 "bdev_name": "nvme2n1" 00:12:54.039 }, 00:12:54.039 { 00:12:54.039 "nbd_device": "/dev/nbd13", 00:12:54.039 "bdev_name": "nvme3n1" 00:12:54.039 } 00:12:54.039 ]' 00:12:54.039 00:17:08 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:54.039 { 00:12:54.039 "nbd_device": "/dev/nbd0", 00:12:54.039 "bdev_name": "nvme0n1" 00:12:54.039 }, 00:12:54.039 { 00:12:54.039 "nbd_device": "/dev/nbd1", 00:12:54.039 "bdev_name": "nvme0n2" 00:12:54.039 }, 00:12:54.039 { 00:12:54.039 "nbd_device": "/dev/nbd10", 00:12:54.039 "bdev_name": "nvme0n3" 00:12:54.039 }, 00:12:54.039 { 00:12:54.039 "nbd_device": "/dev/nbd11", 00:12:54.039 "bdev_name": "nvme1n1" 00:12:54.039 }, 00:12:54.039 { 00:12:54.039 "nbd_device": "/dev/nbd12", 00:12:54.039 "bdev_name": "nvme2n1" 00:12:54.039 }, 00:12:54.039 { 00:12:54.039 "nbd_device": "/dev/nbd13", 00:12:54.039 "bdev_name": "nvme3n1" 00:12:54.039 } 00:12:54.039 ]' 00:12:54.039 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:54.297 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:54.297 /dev/nbd1 00:12:54.297 /dev/nbd10 00:12:54.297 /dev/nbd11 00:12:54.297 /dev/nbd12 00:12:54.297 /dev/nbd13' 00:12:54.297 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:54.297 /dev/nbd1 00:12:54.297 /dev/nbd10 00:12:54.297 /dev/nbd11 00:12:54.297 /dev/nbd12 00:12:54.297 /dev/nbd13' 00:12:54.297 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:54.297 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:54.297 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:54.297 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:54.297 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:54.297 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:54.297 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:54.297 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:54.297 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:54.298 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:54.298 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:54.298 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:54.298 256+0 records in 00:12:54.298 256+0 records out 00:12:54.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122047 s, 85.9 MB/s 00:12:54.298 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:54.298 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:54.298 256+0 records in 00:12:54.298 256+0 records out 00:12:54.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114236 s, 9.2 MB/s 00:12:54.298 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:54.298 00:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:54.555 256+0 records in 00:12:54.555 256+0 records out 00:12:54.555 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.122956 s, 8.5 MB/s 00:12:54.555 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:54.555 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:54.555 256+0 records in 00:12:54.555 256+0 records out 00:12:54.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120737 s, 8.7 MB/s 00:12:54.555 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:54.555 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:54.813 256+0 records in 00:12:54.813 256+0 records out 00:12:54.813 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119293 s, 8.8 MB/s 00:12:54.813 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:54.813 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:54.813 256+0 records in 00:12:54.813 256+0 records out 00:12:54.813 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139452 s, 7.5 MB/s 00:12:54.813 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:54.813 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:55.071 256+0 records in 00:12:55.071 256+0 records out 00:12:55.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124844 s, 8.4 MB/s 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.071 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:55.330 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:55.330 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:55.330 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:55.330 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.330 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.330 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:55.330 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:55.330 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.330 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.330 00:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:55.330 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.589 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:55.847 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:55.847 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:55.847 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:55.847 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.847 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.847 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:55.847 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:55.847 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.847 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.847 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:56.105 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:56.105 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:56.105 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:56.105 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.105 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.105 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:56.105 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.105 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.105 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.105 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:56.363 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:56.363 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:56.363 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:56.363 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.363 00:17:10 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.363 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:56.363 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.363 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.363 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:56.363 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.363 00:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:56.363 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:56.621 malloc_lvol_verify 00:12:56.621 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:56.879 4b9584bd-0ff5-4026-809e-72cf35525bd6 00:12:56.879 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:57.137 f313bb9c-8b0e-4b4a-bded-764e263796af 00:12:57.137 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:57.137 /dev/nbd0 00:12:57.137 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:57.137 mke2fs 1.46.5 (30-Dec-2021) 00:12:57.137 Discarding device blocks: 0/4096 done 
00:12:57.137 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:57.137 00:12:57.137 Allocating group tables: 0/1 done 00:12:57.137 Writing inode tables: 0/1 done 00:12:57.137 Creating journal (1024 blocks): done 00:12:57.137 Writing superblocks and filesystem accounting information: 0/1 done 00:12:57.137 00:12:57.395 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:57.395 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:57.395 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.395 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:57.395 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.395 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:57.395 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.395 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:57.395 00:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 85039 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 85039 ']' 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 85039 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85039 00:12:57.395 killing process with pid 85039 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85039' 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 85039 00:12:57.395 00:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 85039 00:12:57.653 00:17:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:12:57.653 00:12:57.653 real 0m9.177s 00:12:57.653 user 0m11.956s 00:12:57.653 sys 0m4.310s 00:12:57.653 00:17:12 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:12:57.653 00:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:57.653 ************************************ 00:12:57.653 END TEST bdev_nbd 00:12:57.653 ************************************ 00:12:57.912 00:17:12 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:12:57.912 00:17:12 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:12:57.912 00:17:12 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:12:57.912 00:17:12 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:12:57.912 00:17:12 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:57.912 00:17:12 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:57.912 00:17:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.912 ************************************ 00:12:57.912 START TEST bdev_fio 00:12:57.912 ************************************ 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:57.912 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:12:57.912 
00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n2]' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n2 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n3]' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n3 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:57.912 ************************************ 00:12:57.912 START TEST bdev_fio_rw_verify 00:12:57.912 ************************************ 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:57.912 00:17:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:58.170 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:58.170 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:58.170 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:58.170 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:58.170 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:58.170 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:58.170 fio-3.35 00:12:58.170 Starting 6 threads 00:13:10.369 00:13:10.369 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=85430: Tue Jul 23 00:17:23 2024 00:13:10.369 read: IOPS=33.3k, 
BW=130MiB/s (136MB/s)(1301MiB/10001msec) 00:13:10.369 slat (usec): min=2, max=516, avg= 6.46, stdev= 3.73 00:13:10.369 clat (usec): min=87, max=4144, avg=589.64, stdev=164.37 00:13:10.369 lat (usec): min=92, max=4157, avg=596.10, stdev=165.04 00:13:10.369 clat percentiles (usec): 00:13:10.369 | 50.000th=[ 619], 99.000th=[ 988], 99.900th=[ 1614], 99.990th=[ 2704], 00:13:10.369 | 99.999th=[ 4113] 00:13:10.369 write: IOPS=33.6k, BW=131MiB/s (138MB/s)(1314MiB/10001msec); 0 zone resets 00:13:10.369 slat (usec): min=10, max=1799, avg=18.70, stdev=18.69 00:13:10.369 clat (usec): min=82, max=6168, avg=647.65, stdev=174.01 00:13:10.369 lat (usec): min=103, max=6191, avg=666.34, stdev=174.95 00:13:10.369 clat percentiles (usec): 00:13:10.369 | 50.000th=[ 660], 99.000th=[ 1156], 99.900th=[ 1975], 99.990th=[ 3392], 00:13:10.369 | 99.999th=[ 6128] 00:13:10.369 bw ( KiB/s): min=113088, max=147665, per=99.94%, avg=134495.68, stdev=2029.57, samples=114 00:13:10.369 iops : min=28272, max=36916, avg=33623.79, stdev=507.37, samples=114 00:13:10.369 lat (usec) : 100=0.01%, 250=3.20%, 500=14.57%, 750=69.60%, 1000=11.09% 00:13:10.369 lat (msec) : 2=1.47%, 4=0.07%, 10=0.01% 00:13:10.369 cpu : usr=62.62%, sys=27.35%, ctx=7270, majf=0, minf=29455 00:13:10.369 IO depths : 1=12.2%, 2=24.7%, 4=50.3%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:10.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.369 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.369 issued rwts: total=333171,336469,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.369 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:10.369 00:13:10.369 Run status group 0 (all jobs): 00:13:10.369 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=1301MiB (1365MB), run=10001-10001msec 00:13:10.369 WRITE: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=1314MiB (1378MB), run=10001-10001msec 00:13:10.369 ----------------------------------------------------- 00:13:10.369 Suppressions used: 00:13:10.369 count bytes template 00:13:10.369 6 48 /usr/src/fio/parse.c 00:13:10.369 3072 294912 /usr/src/fio/iolog.c 00:13:10.369 1 8 libtcmalloc_minimal.so 00:13:10.369 1 904 libcrypto.so 00:13:10.369 ----------------------------------------------------- 00:13:10.369 00:13:10.369 00:13:10.369 real 0m11.236s 00:13:10.369 user 0m38.389s 00:13:10.369 sys 0m16.763s 00:13:10.369 ************************************ 00:13:10.369 END TEST bdev_fio_rw_verify 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:10.369 ************************************ 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1279 -- # local env_context= 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:13:10.369 00:17:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:10.370 00:17:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "43dfa4b9-cb63-44b7-992c-3e9a52fad3ab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "43dfa4b9-cb63-44b7-992c-3e9a52fad3ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "ee0b8adf-5aaf-4c66-ac07-7ec78073196b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ee0b8adf-5aaf-4c66-ac07-7ec78073196b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "637500fc-927e-46c3-8cb1-3bce0ca54f53"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "637500fc-927e-46c3-8cb1-3bce0ca54f53",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "92e7fc2f-7fb4-445a-9ec3-7576456ec4f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"92e7fc2f-7fb4-445a-9ec3-7576456ec4f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "6d31fd3c-c164-46d8-a7e3-ccfb32429015"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6d31fd3c-c164-46d8-a7e3-ccfb32429015",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "665652b4-39e3-4911-a6e0-29034bc2eb44"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "665652b4-39e3-4911-a6e0-29034bc2eb44",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:10.370 00:17:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:13:10.370 00:17:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:10.370 00:17:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:13:10.370 /home/vagrant/spdk_repo/spdk 00:13:10.370 00:17:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:13:10.370 00:17:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:13:10.370 00:13:10.370 real 0m11.450s 00:13:10.370 user 0m38.484s 00:13:10.370 sys 0m16.885s 00:13:10.370 00:17:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:10.370 ************************************ 00:13:10.370 00:17:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:10.370 END TEST bdev_fio 00:13:10.370 ************************************ 00:13:10.370 00:17:23 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:10.370 00:17:23 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:10.370 00:17:23 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:13:10.370 00:17:23 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:10.370 00:17:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.370 ************************************ 00:13:10.370 START TEST bdev_verify 00:13:10.370 
************************************ 00:13:10.370 00:17:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:10.370 [2024-07-23 00:17:23.982361] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:10.370 [2024-07-23 00:17:23.983302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85594 ] 00:13:10.370 [2024-07-23 00:17:24.138892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:10.370 [2024-07-23 00:17:24.182222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.370 [2024-07-23 00:17:24.182449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.370 Running I/O for 5 seconds... 00:13:15.635 00:13:15.635 Latency(us) 00:13:15.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.635 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:15.635 Verification LBA range: start 0x0 length 0x80000 00:13:15.635 nvme0n1 : 5.07 1893.38 7.40 0.00 0.00 67497.90 9580.36 66115.03 00:13:15.635 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:15.635 Verification LBA range: start 0x80000 length 0x80000 00:13:15.635 nvme0n1 : 5.04 1980.41 7.74 0.00 0.00 64531.98 9211.89 66957.26 00:13:15.635 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:15.635 Verification LBA range: start 0x0 length 0x80000 00:13:15.635 nvme0n2 : 5.04 1880.20 7.34 0.00 0.00 67861.87 12107.05 72431.76 00:13:15.635 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:15.635 Verification LBA range: start 0x80000 length 0x80000 00:13:15.635 nvme0n2 : 5.04 1979.92 7.73 0.00 0.00 64447.52 11633.30 66957.26 00:13:15.635 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:15.635 Verification LBA range: start 0x0 length 0x80000 00:13:15.635 nvme0n3 : 5.04 1904.66 7.44 0.00 0.00 66881.23 9001.33 78748.48 00:13:15.635 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:15.635 Verification LBA range: start 0x80000 length 0x80000 00:13:15.635 nvme0n3 : 5.07 2021.20 7.90 0.00 0.00 63024.33 6843.12 62325.00 00:13:15.635 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:15.635 Verification LBA range: start 0x0 length 0x20000 00:13:15.635 nvme1n1 : 5.08 1966.05 7.68 0.00 0.00 64699.98 9896.20 62746.11 00:13:15.635 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:15.635 Verification LBA range: start 0x20000 length 0x20000 00:13:15.635 nvme1n1 : 5.06 2025.18 7.91 0.00 0.00 62814.81 7053.67 62325.00 00:13:15.635 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:15.635 Verification LBA range: start 0x0 length 0xbd0bd 00:13:15.635 nvme2n1 : 5.07 2531.41 9.89 0.00 0.00 50103.16 5184.98 136441.21 00:13:15.635 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:15.635 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:15.635 nvme2n1 : 5.07 2437.25 9.52 0.00 0.00 51944.75 4658.58 155812.50 00:13:15.635 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:15.635 
Verification LBA range: start 0x0 length 0xa0000 00:13:15.635 nvme3n1 : 5.08 1916.47 7.49 0.00 0.00 66014.72 6316.72 66115.03 00:13:15.635 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:15.635 Verification LBA range: start 0xa0000 length 0xa0000 00:13:15.635 nvme3n1 : 5.06 1996.85 7.80 0.00 0.00 63394.46 7106.31 69905.07 00:13:15.635 =================================================================================================================== 00:13:15.635 Total : 24532.98 95.83 0.00 0.00 62215.70 4658.58 155812.50 00:13:15.635 00:13:15.635 real 0m5.855s 00:13:15.635 user 0m8.380s 00:13:15.635 sys 0m2.257s 00:13:15.635 00:17:29 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:15.635 00:17:29 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:15.635 ************************************ 00:13:15.635 END TEST bdev_verify 00:13:15.635 ************************************ 00:13:15.635 00:17:29 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:15.635 00:17:29 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:13:15.635 00:17:29 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:15.635 00:17:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.635 ************************************ 00:13:15.635 START TEST bdev_verify_big_io 00:13:15.635 ************************************ 00:13:15.635 00:17:29 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:15.635 [2024-07-23 00:17:29.903536] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:15.635 [2024-07-23 00:17:29.903699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85682 ] 00:13:15.635 [2024-07-23 00:17:30.058130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:15.635 [2024-07-23 00:17:30.108371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.635 [2024-07-23 00:17:30.108477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.893 Running I/O for 5 seconds... 
00:13:22.460 00:13:22.460 Latency(us) 00:13:22.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.460 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:22.460 Verification LBA range: start 0x0 length 0x8000 00:13:22.460 nvme0n1 : 5.68 143.72 8.98 0.00 0.00 859426.86 22213.81 1233024.31 00:13:22.460 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:22.460 Verification LBA range: start 0x8000 length 0x8000 00:13:22.460 nvme0n1 : 5.62 164.98 10.31 0.00 0.00 755039.43 119596.62 1010675.66 00:13:22.460 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:22.460 Verification LBA range: start 0x0 length 0x8000 00:13:22.460 nvme0n2 : 5.68 154.95 9.68 0.00 0.00 776885.85 37479.22 997199.99 00:13:22.460 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:22.460 Verification LBA range: start 0x8000 length 0x8000 00:13:22.460 nvme0n2 : 5.63 181.88 11.37 0.00 0.00 668297.51 4948.10 720948.64 00:13:22.460 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:22.460 Verification LBA range: start 0x0 length 0x8000 00:13:22.460 nvme0n3 : 5.68 140.82 8.80 0.00 0.00 832671.33 52849.91 1300402.69 00:13:22.460 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:22.460 Verification LBA range: start 0x8000 length 0x8000 00:13:22.460 nvme0n3 : 5.63 178.98 11.19 0.00 0.00 662627.99 48217.65 549133.78 00:13:22.460 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:22.460 Verification LBA range: start 0x0 length 0x2000 00:13:22.460 nvme1n1 : 5.74 142.68 8.92 0.00 0.00 809432.64 68220.61 1084791.88 00:13:22.460 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:22.460 Verification LBA range: start 0x2000 length 0x2000 00:13:22.460 nvme1n1 : 5.76 102.81 6.43 0.00 0.00 1121965.93 75800.67 2250437.81 00:13:22.460 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:22.460 Verification LBA range: start 0x0 length 0xbd0b 00:13:22.460 nvme2n1 : 5.76 211.11 13.19 0.00 0.00 535099.15 15581.25 693997.29 00:13:22.460 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:22.460 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:22.460 nvme2n1 : 5.75 180.86 11.30 0.00 0.00 627033.29 19687.12 990462.15 00:13:22.460 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:22.460 Verification LBA range: start 0x0 length 0xa000 00:13:22.460 nvme3n1 : 5.77 194.21 12.14 0.00 0.00 569631.91 2710.93 616512.15 00:13:22.460 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:22.460 Verification LBA range: start 0xa000 length 0xa000 00:13:22.460 nvme3n1 : 5.76 208.33 13.02 0.00 0.00 535696.59 1684.46 720948.64 00:13:22.460 =================================================================================================================== 00:13:22.460 Total : 2005.33 125.33 0.00 0.00 700408.91 1684.46 2250437.81 00:13:22.460 00:13:22.460 real 0m6.565s 00:13:22.460 user 0m11.768s 00:13:22.460 sys 0m0.648s 00:13:22.460 00:17:36 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:22.460 00:17:36 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.460 ************************************ 00:13:22.460 END TEST bdev_verify_big_io 00:13:22.460 ************************************ 00:13:22.460 
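Both verify stages above drive the bdevperf example app against the xnvme bdevs described in bdev.json. A minimal stand-alone sketch of the same invocation, with the flags exactly as they appear in this log (the paths and core mask are specific to this CI VM, and -o differs between the 4 KiB bdev_verify pass and the 64 KiB big-I/O pass):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" \
        --json "$SPDK/test/bdev/bdev.json" \   # bdev definitions generated earlier by the suite
        -q 128 \                               # queue depth per job
        -o 65536 \                             # I/O size in bytes (4096 in the bdev_verify pass)
        -w verify -t 5 \                       # verify workload, 5 second run
        -C -m 0x3                              # -m 0x3 pins reactors to cores 0-1; -C as passed by blockdev.sh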
00:17:36 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:22.460 00:17:36 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:22.460 00:17:36 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:22.460 00:17:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.460 ************************************ 00:13:22.460 START TEST bdev_write_zeroes 00:13:22.460 ************************************ 00:13:22.460 00:17:36 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:22.460 [2024-07-23 00:17:36.526881] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:22.460 [2024-07-23 00:17:36.527022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85775 ] 00:13:22.460 [2024-07-23 00:17:36.666535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.460 [2024-07-23 00:17:36.708197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.460 Running I/O for 1 seconds... 00:13:23.398 00:13:23.398 Latency(us) 00:13:23.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.398 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:23.398 nvme0n1 : 1.02 8547.81 33.39 0.00 0.00 14961.66 6027.21 27161.91 00:13:23.398 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:23.398 nvme0n2 : 1.02 8532.37 33.33 0.00 0.00 14979.31 6053.53 27161.91 00:13:23.398 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:23.398 nvme0n3 : 1.01 8595.32 33.58 0.00 0.00 14857.12 7053.67 27161.91 00:13:23.398 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:23.398 nvme1n1 : 1.01 8580.56 33.52 0.00 0.00 14875.89 6895.76 27161.91 00:13:23.398 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:23.398 nvme2n1 : 1.02 11400.74 44.53 0.00 0.00 11156.67 5553.45 19897.68 00:13:23.398 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:23.398 nvme3n1 : 1.02 8562.71 33.45 0.00 0.00 14791.32 5737.69 27583.02 00:13:23.398 =================================================================================================================== 00:13:23.398 Total : 54219.51 211.79 0.00 0.00 14104.17 5553.45 27583.02 00:13:23.658 00:13:23.658 real 0m1.727s 00:13:23.658 user 0m0.982s 00:13:23.658 sys 0m0.555s 00:13:23.658 00:17:38 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:23.658 00:17:38 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:23.658 ************************************ 00:13:23.658 END TEST bdev_write_zeroes 00:13:23.658 ************************************ 00:13:23.658 00:17:38 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 
4096 -w write_zeroes -t 1 '' 00:13:23.658 00:17:38 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:23.658 00:17:38 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:23.658 00:17:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.658 ************************************ 00:13:23.658 START TEST bdev_json_nonenclosed 00:13:23.658 ************************************ 00:13:23.658 00:17:38 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:23.658 [2024-07-23 00:17:38.323197] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:23.658 [2024-07-23 00:17:38.323364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85817 ] 00:13:23.917 [2024-07-23 00:17:38.467578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.917 [2024-07-23 00:17:38.511024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.917 [2024-07-23 00:17:38.511130] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:23.917 [2024-07-23 00:17:38.511156] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:23.917 [2024-07-23 00:17:38.511168] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:24.177 00:13:24.177 real 0m0.380s 00:13:24.177 user 0m0.163s 00:13:24.177 sys 0m0.113s 00:13:24.177 00:17:38 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:24.177 00:17:38 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:24.177 ************************************ 00:13:24.177 END TEST bdev_json_nonenclosed 00:13:24.177 ************************************ 00:13:24.177 00:17:38 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:24.177 00:17:38 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:24.177 00:17:38 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:24.177 00:17:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.177 ************************************ 00:13:24.177 START TEST bdev_json_nonarray 00:13:24.177 ************************************ 00:13:24.177 00:17:38 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:24.177 [2024-07-23 00:17:38.778324] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
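bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each hands bdevperf a deliberately malformed config and passes only when json_config rejects it with the errors quoted in this log. The repo's nonenclosed.json and nonarray.json fixtures are not reproduced here; configs of roughly this shape would trip the same two checks (illustrative contents only, not the actual fixtures):

    cat > /tmp/nonenclosed.json <<'EOF'
    [ { "subsystems": [] } ]
    EOF
    # -> json_config.c: "Invalid JSON configuration: not enclosed in {}."
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": { "bdev": [] } }
    EOF
    # -> json_config.c: "Invalid JSON configuration: 'subsystems' should be an array."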
00:13:24.177 [2024-07-23 00:17:38.778456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85837 ] 00:13:24.437 [2024-07-23 00:17:38.930283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.437 [2024-07-23 00:17:38.971833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.437 [2024-07-23 00:17:38.971950] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:24.437 [2024-07-23 00:17:38.971982] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:24.437 [2024-07-23 00:17:38.971995] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:24.437 00:13:24.437 real 0m0.386s 00:13:24.437 user 0m0.153s 00:13:24.437 sys 0m0.130s 00:13:24.437 00:17:39 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:24.437 00:17:39 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:24.437 ************************************ 00:13:24.437 END TEST bdev_json_nonarray 00:13:24.437 ************************************ 00:13:24.696 00:17:39 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:13:24.696 00:17:39 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:13:24.696 00:17:39 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:13:24.696 00:17:39 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:13:24.696 00:17:39 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:13:24.696 00:17:39 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:24.696 00:17:39 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:24.696 00:17:39 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:24.696 00:17:39 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:24.696 00:17:39 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:24.696 00:17:39 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:24.696 00:17:39 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:25.264 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:33.379 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:33.379 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:33.379 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:33.379 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:33.379 00:13:33.379 real 0m54.516s 00:13:33.379 user 1m24.173s 00:13:33.379 sys 0m42.425s 00:13:33.379 00:17:47 blockdev_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:33.379 ************************************ 00:13:33.379 END TEST blockdev_xnvme 00:13:33.379 00:17:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.379 ************************************ 00:13:33.379 00:17:47 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:33.379 00:17:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:33.379 00:17:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:33.379 00:17:47 -- common/autotest_common.sh@10 -- 
# set +x 00:13:33.379 ************************************ 00:13:33.379 START TEST ublk 00:13:33.379 ************************************ 00:13:33.379 00:17:47 ublk -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:33.379 * Looking for test storage... 00:13:33.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:33.379 00:17:47 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:33.379 00:17:47 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:33.379 00:17:47 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:33.380 00:17:47 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:33.380 00:17:47 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:33.380 00:17:47 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:33.380 00:17:47 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:33.380 00:17:47 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:33.380 00:17:47 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:33.380 00:17:47 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:33.380 00:17:47 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:33.380 00:17:47 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:33.380 00:17:47 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:33.380 00:17:47 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:33.380 00:17:47 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:33.380 00:17:47 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:33.380 00:17:47 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:33.380 00:17:47 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:33.380 00:17:47 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:33.380 00:17:48 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:33.380 00:17:48 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:33.380 00:17:48 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:33.380 00:17:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:33.380 ************************************ 00:13:33.380 START TEST test_save_ublk_config 00:13:33.380 ************************************ 00:13:33.380 00:17:48 ublk.test_save_ublk_config -- common/autotest_common.sh@1121 -- # test_save_config 00:13:33.380 00:17:48 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:33.380 00:17:48 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=86129 00:13:33.380 00:17:48 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:33.380 00:17:48 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:33.380 00:17:48 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 86129 00:13:33.380 00:17:48 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 86129 ']' 00:13:33.380 00:17:48 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.380 00:17:48 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:33.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.380 00:17:48 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
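test_save_ublk_config boots a fresh spdk_tgt with ublk debug logging, builds a ublk disk on top of a malloc bdev, then snapshots the runtime state with save_config. The RPC method names below are the ones visible in the saved JSON further down; the rpc.py flag spellings are assumptions, so treat this as a sketch rather than the suite's exact commands:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    $RPC ublk_create_target -c 1                    # "cpumask": "1" in the saved config (flag spelling assumed)
    $RPC bdev_malloc_create -b malloc0 32 4096      # 8192 blocks x 4096 B = 32 MiB, per the saved config
    $RPC ublk_start_disk malloc0 0 -q 1 -d 128      # ublk_id 0, 1 queue, depth 128 (flag names assumed)
    $RPC save_config > ublk_config.json             # produces the JSON dump below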
00:13:33.380 00:17:48 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:33.380 00:17:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:33.639 [2024-07-23 00:17:48.126870] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:33.639 [2024-07-23 00:17:48.127003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86129 ] 00:13:33.639 [2024-07-23 00:17:48.277325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.899 [2024-07-23 00:17:48.330240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.467 00:17:48 ublk.test_save_ublk_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:34.467 00:17:48 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:13:34.467 00:17:48 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:34.467 00:17:48 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:34.467 00:17:48 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.467 00:17:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:34.467 [2024-07-23 00:17:48.920291] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:34.467 [2024-07-23 00:17:48.920574] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:34.467 malloc0 00:13:34.467 [2024-07-23 00:17:48.951436] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:34.467 [2024-07-23 00:17:48.951530] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:34.467 [2024-07-23 00:17:48.951553] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:34.467 [2024-07-23 00:17:48.951561] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:34.467 [2024-07-23 00:17:48.962309] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:34.467 [2024-07-23 00:17:48.962335] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:34.467 [2024-07-23 00:17:48.970296] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:34.467 [2024-07-23 00:17:48.970398] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:34.467 [2024-07-23 00:17:48.994299] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:34.467 0 00:13:34.467 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.467 00:17:49 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:34.467 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.467 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:34.726 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.726 00:17:49 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:34.726 "subsystems": [ 00:13:34.726 { 00:13:34.726 "subsystem": "keyring", 00:13:34.726 "config": [] 00:13:34.726 }, 00:13:34.726 { 00:13:34.726 "subsystem": "iobuf", 00:13:34.726 "config": [ 00:13:34.726 { 
00:13:34.726 "method": "iobuf_set_options", 00:13:34.726 "params": { 00:13:34.726 "small_pool_count": 8192, 00:13:34.726 "large_pool_count": 1024, 00:13:34.726 "small_bufsize": 8192, 00:13:34.726 "large_bufsize": 135168 00:13:34.726 } 00:13:34.726 } 00:13:34.726 ] 00:13:34.726 }, 00:13:34.726 { 00:13:34.726 "subsystem": "sock", 00:13:34.726 "config": [ 00:13:34.726 { 00:13:34.726 "method": "sock_set_default_impl", 00:13:34.726 "params": { 00:13:34.726 "impl_name": "posix" 00:13:34.726 } 00:13:34.726 }, 00:13:34.726 { 00:13:34.726 "method": "sock_impl_set_options", 00:13:34.726 "params": { 00:13:34.726 "impl_name": "ssl", 00:13:34.726 "recv_buf_size": 4096, 00:13:34.726 "send_buf_size": 4096, 00:13:34.726 "enable_recv_pipe": true, 00:13:34.726 "enable_quickack": false, 00:13:34.726 "enable_placement_id": 0, 00:13:34.726 "enable_zerocopy_send_server": true, 00:13:34.726 "enable_zerocopy_send_client": false, 00:13:34.726 "zerocopy_threshold": 0, 00:13:34.726 "tls_version": 0, 00:13:34.726 "enable_ktls": false 00:13:34.726 } 00:13:34.726 }, 00:13:34.726 { 00:13:34.726 "method": "sock_impl_set_options", 00:13:34.726 "params": { 00:13:34.726 "impl_name": "posix", 00:13:34.726 "recv_buf_size": 2097152, 00:13:34.726 "send_buf_size": 2097152, 00:13:34.726 "enable_recv_pipe": true, 00:13:34.726 "enable_quickack": false, 00:13:34.726 "enable_placement_id": 0, 00:13:34.726 "enable_zerocopy_send_server": true, 00:13:34.726 "enable_zerocopy_send_client": false, 00:13:34.726 "zerocopy_threshold": 0, 00:13:34.726 "tls_version": 0, 00:13:34.726 "enable_ktls": false 00:13:34.726 } 00:13:34.726 } 00:13:34.726 ] 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "subsystem": "vmd", 00:13:34.727 "config": [] 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "subsystem": "accel", 00:13:34.727 "config": [ 00:13:34.727 { 00:13:34.727 "method": "accel_set_options", 00:13:34.727 "params": { 00:13:34.727 "small_cache_size": 128, 00:13:34.727 "large_cache_size": 16, 00:13:34.727 "task_count": 2048, 00:13:34.727 "sequence_count": 2048, 00:13:34.727 "buf_count": 2048 00:13:34.727 } 00:13:34.727 } 00:13:34.727 ] 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "subsystem": "bdev", 00:13:34.727 "config": [ 00:13:34.727 { 00:13:34.727 "method": "bdev_set_options", 00:13:34.727 "params": { 00:13:34.727 "bdev_io_pool_size": 65535, 00:13:34.727 "bdev_io_cache_size": 256, 00:13:34.727 "bdev_auto_examine": true, 00:13:34.727 "iobuf_small_cache_size": 128, 00:13:34.727 "iobuf_large_cache_size": 16 00:13:34.727 } 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "method": "bdev_raid_set_options", 00:13:34.727 "params": { 00:13:34.727 "process_window_size_kb": 1024 00:13:34.727 } 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "method": "bdev_iscsi_set_options", 00:13:34.727 "params": { 00:13:34.727 "timeout_sec": 30 00:13:34.727 } 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "method": "bdev_nvme_set_options", 00:13:34.727 "params": { 00:13:34.727 "action_on_timeout": "none", 00:13:34.727 "timeout_us": 0, 00:13:34.727 "timeout_admin_us": 0, 00:13:34.727 "keep_alive_timeout_ms": 10000, 00:13:34.727 "arbitration_burst": 0, 00:13:34.727 "low_priority_weight": 0, 00:13:34.727 "medium_priority_weight": 0, 00:13:34.727 "high_priority_weight": 0, 00:13:34.727 "nvme_adminq_poll_period_us": 10000, 00:13:34.727 "nvme_ioq_poll_period_us": 0, 00:13:34.727 "io_queue_requests": 0, 00:13:34.727 "delay_cmd_submit": true, 00:13:34.727 "transport_retry_count": 4, 00:13:34.727 "bdev_retry_count": 3, 00:13:34.727 "transport_ack_timeout": 0, 00:13:34.727 
"ctrlr_loss_timeout_sec": 0, 00:13:34.727 "reconnect_delay_sec": 0, 00:13:34.727 "fast_io_fail_timeout_sec": 0, 00:13:34.727 "disable_auto_failback": false, 00:13:34.727 "generate_uuids": false, 00:13:34.727 "transport_tos": 0, 00:13:34.727 "nvme_error_stat": false, 00:13:34.727 "rdma_srq_size": 0, 00:13:34.727 "io_path_stat": false, 00:13:34.727 "allow_accel_sequence": false, 00:13:34.727 "rdma_max_cq_size": 0, 00:13:34.727 "rdma_cm_event_timeout_ms": 0, 00:13:34.727 "dhchap_digests": [ 00:13:34.727 "sha256", 00:13:34.727 "sha384", 00:13:34.727 "sha512" 00:13:34.727 ], 00:13:34.727 "dhchap_dhgroups": [ 00:13:34.727 "null", 00:13:34.727 "ffdhe2048", 00:13:34.727 "ffdhe3072", 00:13:34.727 "ffdhe4096", 00:13:34.727 "ffdhe6144", 00:13:34.727 "ffdhe8192" 00:13:34.727 ] 00:13:34.727 } 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "method": "bdev_nvme_set_hotplug", 00:13:34.727 "params": { 00:13:34.727 "period_us": 100000, 00:13:34.727 "enable": false 00:13:34.727 } 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "method": "bdev_malloc_create", 00:13:34.727 "params": { 00:13:34.727 "name": "malloc0", 00:13:34.727 "num_blocks": 8192, 00:13:34.727 "block_size": 4096, 00:13:34.727 "physical_block_size": 4096, 00:13:34.727 "uuid": "4954e3a8-2750-46d0-ba26-8eff970bae70", 00:13:34.727 "optimal_io_boundary": 0 00:13:34.727 } 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "method": "bdev_wait_for_examine" 00:13:34.727 } 00:13:34.727 ] 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "subsystem": "scsi", 00:13:34.727 "config": null 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "subsystem": "scheduler", 00:13:34.727 "config": [ 00:13:34.727 { 00:13:34.727 "method": "framework_set_scheduler", 00:13:34.727 "params": { 00:13:34.727 "name": "static" 00:13:34.727 } 00:13:34.727 } 00:13:34.727 ] 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "subsystem": "vhost_scsi", 00:13:34.727 "config": [] 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "subsystem": "vhost_blk", 00:13:34.727 "config": [] 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "subsystem": "ublk", 00:13:34.727 "config": [ 00:13:34.727 { 00:13:34.727 "method": "ublk_create_target", 00:13:34.727 "params": { 00:13:34.727 "cpumask": "1" 00:13:34.727 } 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "method": "ublk_start_disk", 00:13:34.727 "params": { 00:13:34.727 "bdev_name": "malloc0", 00:13:34.727 "ublk_id": 0, 00:13:34.727 "num_queues": 1, 00:13:34.727 "queue_depth": 128 00:13:34.727 } 00:13:34.727 } 00:13:34.727 ] 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "subsystem": "nbd", 00:13:34.727 "config": [] 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "subsystem": "nvmf", 00:13:34.727 "config": [ 00:13:34.727 { 00:13:34.727 "method": "nvmf_set_config", 00:13:34.727 "params": { 00:13:34.727 "discovery_filter": "match_any", 00:13:34.727 "admin_cmd_passthru": { 00:13:34.727 "identify_ctrlr": false 00:13:34.727 } 00:13:34.727 } 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "method": "nvmf_set_max_subsystems", 00:13:34.727 "params": { 00:13:34.727 "max_subsystems": 1024 00:13:34.727 } 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "method": "nvmf_set_crdt", 00:13:34.727 "params": { 00:13:34.727 "crdt1": 0, 00:13:34.727 "crdt2": 0, 00:13:34.727 "crdt3": 0 00:13:34.727 } 00:13:34.727 } 00:13:34.727 ] 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "subsystem": "iscsi", 00:13:34.727 "config": [ 00:13:34.727 { 00:13:34.727 "method": "iscsi_set_options", 00:13:34.727 "params": { 00:13:34.727 "node_base": "iqn.2016-06.io.spdk", 00:13:34.727 "max_sessions": 128, 00:13:34.727 "max_connections_per_session": 
2, 00:13:34.727 "max_queue_depth": 64, 00:13:34.727 "default_time2wait": 2, 00:13:34.727 "default_time2retain": 20, 00:13:34.727 "first_burst_length": 8192, 00:13:34.727 "immediate_data": true, 00:13:34.727 "allow_duplicated_isid": false, 00:13:34.727 "error_recovery_level": 0, 00:13:34.727 "nop_timeout": 60, 00:13:34.727 "nop_in_interval": 30, 00:13:34.727 "disable_chap": false, 00:13:34.727 "require_chap": false, 00:13:34.727 "mutual_chap": false, 00:13:34.727 "chap_group": 0, 00:13:34.727 "max_large_datain_per_connection": 64, 00:13:34.727 "max_r2t_per_connection": 4, 00:13:34.727 "pdu_pool_size": 36864, 00:13:34.727 "immediate_data_pool_size": 16384, 00:13:34.727 "data_out_pool_size": 2048 00:13:34.727 } 00:13:34.727 } 00:13:34.727 ] 00:13:34.727 } 00:13:34.727 ] 00:13:34.727 }' 00:13:34.727 00:17:49 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 86129 00:13:34.727 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 86129 ']' 00:13:34.727 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 86129 00:13:34.727 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:13:34.727 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:34.727 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86129 00:13:34.727 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:34.727 killing process with pid 86129 00:13:34.727 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:34.727 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86129' 00:13:34.727 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 86129 00:13:34.727 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 86129 00:13:34.986 [2024-07-23 00:17:49.585707] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:34.986 [2024-07-23 00:17:49.622313] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:34.986 [2024-07-23 00:17:49.622453] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:34.986 [2024-07-23 00:17:49.632284] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:34.986 [2024-07-23 00:17:49.632348] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:34.986 [2024-07-23 00:17:49.632361] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:34.986 [2024-07-23 00:17:49.632396] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:13:34.986 [2024-07-23 00:17:49.632546] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:13:35.244 00:17:49 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:35.244 00:17:49 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=86166 00:13:35.244 00:17:49 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 86166 00:13:35.244 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 86166 ']' 00:13:35.244 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:35.244 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:35.244 00:17:49 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:35.244 "subsystems": [ 00:13:35.244 { 00:13:35.244 "subsystem": "keyring", 00:13:35.244 "config": [] 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "subsystem": "iobuf", 00:13:35.244 "config": [ 00:13:35.244 { 00:13:35.244 "method": "iobuf_set_options", 00:13:35.244 "params": { 00:13:35.244 "small_pool_count": 8192, 00:13:35.244 "large_pool_count": 1024, 00:13:35.244 "small_bufsize": 8192, 00:13:35.244 "large_bufsize": 135168 00:13:35.244 } 00:13:35.244 } 00:13:35.244 ] 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "subsystem": "sock", 00:13:35.244 "config": [ 00:13:35.244 { 00:13:35.244 "method": "sock_set_default_impl", 00:13:35.244 "params": { 00:13:35.244 "impl_name": "posix" 00:13:35.244 } 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "method": "sock_impl_set_options", 00:13:35.244 "params": { 00:13:35.244 "impl_name": "ssl", 00:13:35.244 "recv_buf_size": 4096, 00:13:35.244 "send_buf_size": 4096, 00:13:35.244 "enable_recv_pipe": true, 00:13:35.244 "enable_quickack": false, 00:13:35.244 "enable_placement_id": 0, 00:13:35.244 "enable_zerocopy_send_server": true, 00:13:35.244 "enable_zerocopy_send_client": false, 00:13:35.244 "zerocopy_threshold": 0, 00:13:35.244 "tls_version": 0, 00:13:35.244 "enable_ktls": false 00:13:35.244 } 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "method": "sock_impl_set_options", 00:13:35.244 "params": { 00:13:35.244 "impl_name": "posix", 00:13:35.244 "recv_buf_size": 2097152, 00:13:35.244 "send_buf_size": 2097152, 00:13:35.244 "enable_recv_pipe": true, 00:13:35.244 "enable_quickack": false, 00:13:35.244 "enable_placement_id": 0, 00:13:35.244 "enable_zerocopy_send_server": true, 00:13:35.244 "enable_zerocopy_send_client": false, 00:13:35.244 "zerocopy_threshold": 0, 00:13:35.244 "tls_version": 0, 00:13:35.244 "enable_ktls": false 00:13:35.244 } 00:13:35.244 } 00:13:35.244 ] 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "subsystem": "vmd", 00:13:35.244 "config": [] 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "subsystem": "accel", 00:13:35.244 "config": [ 00:13:35.244 { 00:13:35.244 "method": "accel_set_options", 00:13:35.244 "params": { 00:13:35.244 "small_cache_size": 128, 00:13:35.244 "large_cache_size": 16, 00:13:35.244 "task_count": 2048, 00:13:35.244 "sequence_count": 2048, 00:13:35.244 "buf_count": 2048 00:13:35.244 } 00:13:35.244 } 00:13:35.244 ] 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "subsystem": "bdev", 00:13:35.244 "config": [ 00:13:35.244 { 00:13:35.244 "method": "bdev_set_options", 00:13:35.244 "params": { 00:13:35.244 "bdev_io_pool_size": 65535, 00:13:35.244 "bdev_io_cache_size": 256, 00:13:35.244 "bdev_auto_examine": true, 00:13:35.244 "iobuf_small_cache_size": 128, 00:13:35.244 "iobuf_large_cache_size": 16 00:13:35.244 } 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "method": "bdev_raid_set_options", 00:13:35.244 "params": { 00:13:35.244 "process_window_size_kb": 1024 00:13:35.244 } 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "method": "bdev_iscsi_set_options", 00:13:35.244 "params": { 00:13:35.244 "timeout_sec": 30 00:13:35.244 } 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "method": "bdev_nvme_set_options", 00:13:35.244 "params": { 00:13:35.244 "action_on_timeout": "none", 00:13:35.244 "timeout_us": 0, 00:13:35.244 "timeout_admin_us": 0, 00:13:35.244 "keep_alive_timeout_ms": 10000, 00:13:35.244 "arbitration_burst": 0, 00:13:35.244 "low_priority_weight": 0, 
00:13:35.244 "medium_priority_weight": 0, 00:13:35.244 "high_priority_weight": 0, 00:13:35.244 "nvme_adminq_poll_period_us": 10000, 00:13:35.244 "nvme_ioq_poll_period_us": 0, 00:13:35.244 "io_queue_requests": 0, 00:13:35.244 "delay_cmd_submit": true, 00:13:35.244 "transport_retry_count": 4, 00:13:35.244 "bdev_retry_count": 3, 00:13:35.244 "transport_ack_timeout": 0, 00:13:35.244 "ctrlr_loss_timeout_sec": 0, 00:13:35.244 "reconnect_delay_sec": 0, 00:13:35.244 "fast_io_fail_timeout_sec": 0, 00:13:35.244 "disable_auto_failback": false, 00:13:35.244 "generate_uuids": false, 00:13:35.244 "transport_tos": 0, 00:13:35.244 "nvme_error_stat": false, 00:13:35.244 "rdma_srq_size": 0, 00:13:35.244 "io_path_stat": false, 00:13:35.244 "allow_accel_sequence": false, 00:13:35.244 "rdma_max_cq_size": 0, 00:13:35.244 "rdma_cm_event_timeout_ms": 0, 00:13:35.244 "dhchap_digests": [ 00:13:35.244 "sha256", 00:13:35.244 "sha384", 00:13:35.244 "sha512" 00:13:35.244 ], 00:13:35.244 "dhchap_dhgroups": [ 00:13:35.244 "null", 00:13:35.244 "ffdhe2048", 00:13:35.244 "ffdhe3072", 00:13:35.244 "ffdhe4096", 00:13:35.244 "ffdhe6144", 00:13:35.244 "ffdhe8192" 00:13:35.244 ] 00:13:35.244 } 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "method": "bdev_nvme_set_hotplug", 00:13:35.244 "params": { 00:13:35.244 "period_us": 100000, 00:13:35.244 "enable": false 00:13:35.244 } 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "method": "bdev_malloc_create", 00:13:35.244 "params": { 00:13:35.244 "name": "malloc0", 00:13:35.244 "num_blocks": 8192, 00:13:35.244 "block_size": 4096, 00:13:35.244 "physical_block_size": 4096, 00:13:35.244 "uuid": "4954e3a8-2750-46d0-ba26-8eff970bae70", 00:13:35.244 "optimal_io_boundary": 0 00:13:35.244 } 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "method": "bdev_wait_for_examine" 00:13:35.244 } 00:13:35.244 ] 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "subsystem": "scsi", 00:13:35.244 "config": null 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "subsystem": "scheduler", 00:13:35.244 "config": [ 00:13:35.244 { 00:13:35.244 "method": "framework_set_scheduler", 00:13:35.244 "params": { 00:13:35.244 "name": "static" 00:13:35.244 } 00:13:35.244 } 00:13:35.244 ] 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "subsystem": "vhost_scsi", 00:13:35.244 "config": [] 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "subsystem": "vhost_blk", 00:13:35.244 "config": [] 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "subsystem": "ublk", 00:13:35.244 "config": [ 00:13:35.244 { 00:13:35.244 "method": "ublk_create_target", 00:13:35.244 "params": { 00:13:35.244 "cpumask": "1" 00:13:35.244 } 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "method": "ublk_start_disk", 00:13:35.244 "params": { 00:13:35.244 "bdev_name": "malloc0", 00:13:35.244 "ublk_id": 0, 00:13:35.244 "num_queues": 1, 00:13:35.244 "queue_depth": 128 00:13:35.244 } 00:13:35.244 } 00:13:35.244 ] 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "subsystem": "nbd", 00:13:35.244 "config": [] 00:13:35.244 }, 00:13:35.244 { 00:13:35.244 "subsystem": "nvmf", 00:13:35.244 "config": [ 00:13:35.244 { 00:13:35.244 "method": "nvmf_set_config", 00:13:35.245 "params": { 00:13:35.245 "discovery_filter": "match_any", 00:13:35.245 "admin_cmd_passthru": { 00:13:35.245 "identify_ctrlr": false 00:13:35.245 } 00:13:35.245 } 00:13:35.245 }, 00:13:35.245 { 00:13:35.245 "method": "nvmf_set_max_subsystems", 00:13:35.245 "params": { 00:13:35.245 "max_subsystems": 1024 00:13:35.245 } 00:13:35.245 }, 00:13:35.245 { 00:13:35.245 "method": "nvmf_set_crdt", 00:13:35.245 "params": { 00:13:35.245 "crdt1": 0, 00:13:35.245 
"crdt2": 0, 00:13:35.245 "crdt3": 0 00:13:35.245 } 00:13:35.245 } 00:13:35.245 ] 00:13:35.245 }, 00:13:35.245 { 00:13:35.245 "subsystem": "iscsi", 00:13:35.245 "config": [ 00:13:35.245 { 00:13:35.245 "method": "iscsi_set_options", 00:13:35.245 "params": { 00:13:35.245 "node_base": "iqn.2016-06.io.spdk", 00:13:35.245 "max_sessions": 128, 00:13:35.245 "max_connections_per_session": 2, 00:13:35.245 "max_queue_depth": 64, 00:13:35.245 "default_time2wait": 2, 00:13:35.245 "default_time2retain": 20, 00:13:35.245 "first_burst_length": 8192, 00:13:35.245 "immediate_data": true, 00:13:35.245 "allow_duplicated_isid": false, 00:13:35.245 "error_recovery_level": 0, 00:13:35.245 "nop_timeout": 60, 00:13:35.245 "nop_in_interval": 30, 00:13:35.245 "disable_chap": false, 00:13:35.245 "require_chap": false, 00:13:35.245 "mutual_chap": false, 00:13:35.245 "chap_group": 0, 00:13:35.245 "max_large_datain_per_connection": 64, 00:13:35.245 "max_r2t_per_connection": 4, 00:13:35.245 "pdu_pool_size": 36864, 00:13:35.245 "immediate_data_pool_size": 16384, 00:13:35.245 "data_out_pool_size": 2048 00:13:35.245 } 00:13:35.245 } 00:13:35.245 ] 00:13:35.245 } 00:13:35.245 ] 00:13:35.245 }' 00:13:35.245 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.245 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:35.245 00:17:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:35.504 [2024-07-23 00:17:49.973522] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:35.504 [2024-07-23 00:17:49.973654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86166 ] 00:13:35.504 [2024-07-23 00:17:50.122152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.504 [2024-07-23 00:17:50.176485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.072 [2024-07-23 00:17:50.506281] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:36.072 [2024-07-23 00:17:50.506571] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:36.072 [2024-07-23 00:17:50.513415] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:36.072 [2024-07-23 00:17:50.513488] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:36.072 [2024-07-23 00:17:50.513501] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:36.072 [2024-07-23 00:17:50.513509] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:36.072 [2024-07-23 00:17:50.522356] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:36.072 [2024-07-23 00:17:50.522378] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:36.072 [2024-07-23 00:17:50.529296] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:36.072 [2024-07-23 00:17:50.529390] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:36.072 [2024-07-23 00:17:50.546287] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:36.330 00:17:50 
ublk.test_save_ublk_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 86166 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 86166 ']' 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 86166 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86166 00:13:36.330 killing process with pid 86166 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86166' 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 86166 00:13:36.330 00:17:50 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 86166 00:13:36.589 [2024-07-23 00:17:51.124264] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:36.589 [2024-07-23 00:17:51.163318] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:36.589 [2024-07-23 00:17:51.163483] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:36.589 [2024-07-23 00:17:51.171293] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:36.589 [2024-07-23 00:17:51.171347] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:36.589 [2024-07-23 00:17:51.171357] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:36.589 [2024-07-23 00:17:51.171383] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:13:36.589 [2024-07-23 00:17:51.171530] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:13:36.847 00:17:51 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:36.847 00:13:36.847 real 0m3.402s 00:13:36.847 user 0m2.520s 00:13:36.847 sys 0m1.517s 00:13:36.847 00:17:51 ublk.test_save_ublk_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:36.847 ************************************ 00:13:36.847 END TEST test_save_ublk_config 00:13:36.847 ************************************ 00:13:36.847 00:17:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:36.847 00:17:51 ublk -- ublk/ublk.sh@139 -- # spdk_pid=86212 
00:13:36.847 00:17:51 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:36.847 00:17:51 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:36.847 00:17:51 ublk -- ublk/ublk.sh@141 -- # waitforlisten 86212 00:13:36.847 00:17:51 ublk -- common/autotest_common.sh@827 -- # '[' -z 86212 ']' 00:13:36.847 00:17:51 ublk -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.847 00:17:51 ublk -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:36.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.847 00:17:51 ublk -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.847 00:17:51 ublk -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:36.847 00:17:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:37.105 [2024-07-23 00:17:51.578379] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:37.105 [2024-07-23 00:17:51.578511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86212 ] 00:13:37.105 [2024-07-23 00:17:51.730584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:37.105 [2024-07-23 00:17:51.775482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.105 [2024-07-23 00:17:51.775579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.040 00:17:52 ublk -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:38.040 00:17:52 ublk -- common/autotest_common.sh@860 -- # return 0 00:13:38.040 00:17:52 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:38.040 00:17:52 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:38.040 00:17:52 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:38.040 00:17:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:38.040 ************************************ 00:13:38.040 START TEST test_create_ublk 00:13:38.040 ************************************ 00:13:38.040 00:17:52 ublk.test_create_ublk -- common/autotest_common.sh@1121 -- # test_create_ublk 00:13:38.040 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:38.040 00:17:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.040 00:17:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:38.040 [2024-07-23 00:17:52.382286] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:38.040 [2024-07-23 00:17:52.383624] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:38.040 00:17:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.040 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:38.040 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:38.040 00:17:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.040 00:17:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:38.040 00:17:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.040 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # 
malloc_name=Malloc0 00:13:38.040 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:38.040 00:17:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.040 00:17:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:38.040 [2024-07-23 00:17:52.453426] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:38.040 [2024-07-23 00:17:52.453874] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:38.040 [2024-07-23 00:17:52.453897] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:38.040 [2024-07-23 00:17:52.453907] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:38.040 [2024-07-23 00:17:52.462529] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:38.040 [2024-07-23 00:17:52.462554] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:38.040 [2024-07-23 00:17:52.469297] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:38.040 [2024-07-23 00:17:52.476343] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:38.040 [2024-07-23 00:17:52.498293] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:38.040 00:17:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.040 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:38.040 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:38.040 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:38.040 00:17:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.040 00:17:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:38.040 00:17:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.040 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:38.040 { 00:13:38.040 "ublk_device": "/dev/ublkb0", 00:13:38.040 "id": 0, 00:13:38.040 "queue_depth": 512, 00:13:38.040 "num_queues": 4, 00:13:38.040 "bdev_name": "Malloc0" 00:13:38.040 } 00:13:38.040 ]' 00:13:38.040 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:38.041 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:38.041 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:38.041 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:38.041 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:38.041 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:38.041 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:38.041 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:38.041 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:38.299 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:38.299 00:17:52 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:38.299 00:17:52 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:38.299 00:17:52 
ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:38.299 00:17:52 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:38.299 00:17:52 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:38.299 00:17:52 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:38.299 00:17:52 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:38.299 00:17:52 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:38.299 00:17:52 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:38.299 00:17:52 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:38.299 00:17:52 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:38.299 00:17:52 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:38.299 fio: verification read phase will never start because write phase uses all of runtime 00:13:38.299 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:38.299 fio-3.35 00:13:38.299 Starting 1 process 00:13:48.313 00:13:48.313 fio_test: (groupid=0, jobs=1): err= 0: pid=86257: Tue Jul 23 00:18:02 2024 00:13:48.313 write: IOPS=15.1k, BW=58.8MiB/s (61.7MB/s)(589MiB/10001msec); 0 zone resets 00:13:48.313 clat (usec): min=37, max=3996, avg=65.57, stdev=103.22 00:13:48.313 lat (usec): min=38, max=3996, avg=66.00, stdev=103.23 00:13:48.313 clat percentiles (usec): 00:13:48.313 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 52], 20.00th=[ 55], 00:13:48.313 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:13:48.313 | 70.00th=[ 60], 80.00th=[ 64], 90.00th=[ 89], 95.00th=[ 94], 00:13:48.313 | 99.00th=[ 133], 99.50th=[ 147], 99.90th=[ 2008], 99.95th=[ 2999], 00:13:48.313 | 99.99th=[ 3687] 00:13:48.313 bw ( KiB/s): min=34656, max=76920, per=100.00%, avg=61244.53, stdev=11473.05, samples=19 00:13:48.313 iops : min= 8664, max=19230, avg=15311.11, stdev=2868.25, samples=19 00:13:48.313 lat (usec) : 50=8.10%, 100=89.24%, 250=2.45%, 500=0.01%, 750=0.01% 00:13:48.313 lat (usec) : 1000=0.02% 00:13:48.313 lat (msec) : 2=0.07%, 4=0.10% 00:13:48.313 cpu : usr=2.81%, sys=9.86%, ctx=150670, majf=0, minf=796 00:13:48.313 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.313 issued rwts: total=0,150669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.313 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.313 00:13:48.313 Run status group 0 (all jobs): 00:13:48.313 WRITE: bw=58.8MiB/s (61.7MB/s), 58.8MiB/s-58.8MiB/s (61.7MB/s-61.7MB/s), io=589MiB (617MB), run=10001-10001msec 00:13:48.313 00:13:48.313 Disk stats (read/write): 00:13:48.313 ublkb0: ios=0/149556, merge=0/0, ticks=0/8681, in_queue=8682, util=99.12% 00:13:48.313 00:18:02 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 
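Stripped of the harness plumbing, the single-disk pass that just completed is this sequence of RPCs plus the fio verify-write shown above (a condensed sketch; rpc.py stands in for the rpc_cmd wrapper used by the trace):

    ./scripts/rpc.py ublk_create_target                      # default cpumask
    ./scripts/rpc.py bdev_malloc_create 128 4096             # 128 MiB bdev, returns Malloc0
    ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # exposes /dev/ublkb0
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
    ./scripts/rpc.py ublk_stop_disk 0                        # detach /dev/ublkb0
    ./scripts/rpc.py ublk_destroy_target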
00:13:48.313 00:18:02 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.313 00:18:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.573 [2024-07-23 00:18:02.997805] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:48.573 [2024-07-23 00:18:03.027844] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:48.573 [2024-07-23 00:18:03.032952] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:48.573 [2024-07-23 00:18:03.042393] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:48.573 [2024-07-23 00:18:03.042795] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:48.573 [2024-07-23 00:18:03.042811] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.573 00:18:03 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.573 [2024-07-23 00:18:03.053395] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:48.573 request: 00:13:48.573 { 00:13:48.573 "ublk_id": 0, 00:13:48.573 "method": "ublk_stop_disk", 00:13:48.573 "req_id": 1 00:13:48.573 } 00:13:48.573 Got JSON-RPC error response 00:13:48.573 response: 00:13:48.573 { 00:13:48.573 "code": -19, 00:13:48.573 "message": "No such device" 00:13:48.573 } 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:48.573 00:18:03 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.573 [2024-07-23 00:18:03.074421] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:13:48.573 [2024-07-23 00:18:03.077773] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:13:48.573 [2024-07-23 00:18:03.077812] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:13:48.573 00:18:03 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.573 00:18:03 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:48.573 00:18:03 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.573 00:18:03 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:48.573 00:18:03 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:13:48.573 00:18:03 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:48.573 00:18:03 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.573 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.573 00:18:03 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:48.573 00:18:03 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:13:48.573 00:18:03 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:48.573 00:13:48.574 real 0m10.875s 00:13:48.574 user 0m0.681s 00:13:48.574 sys 0m1.101s 00:13:48.574 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:48.574 00:18:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.574 ************************************ 00:13:48.574 END TEST test_create_ublk 00:13:48.574 ************************************ 00:13:48.832 00:18:03 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:48.832 00:18:03 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:48.832 00:18:03 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:48.832 00:18:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.832 ************************************ 00:13:48.832 START TEST test_create_multi_ublk 00:13:48.832 ************************************ 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@1121 -- # test_create_multi_ublk 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.832 [2024-07-23 00:18:03.331284] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:48.832 [2024-07-23 00:18:03.332568] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 
0 3 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.832 [2024-07-23 00:18:03.419464] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:48.832 [2024-07-23 00:18:03.419936] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:48.832 [2024-07-23 00:18:03.419954] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:48.832 [2024-07-23 00:18:03.419965] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:48.832 [2024-07-23 00:18:03.428610] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:48.832 [2024-07-23 00:18:03.428637] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:48.832 [2024-07-23 00:18:03.435303] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:48.832 [2024-07-23 00:18:03.435854] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:48.832 [2024-07-23 00:18:03.446224] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.832 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.091 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.091 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:49.091 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:49.091 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.091 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.091 [2024-07-23 00:18:03.532472] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:49.091 [2024-07-23 00:18:03.532918] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:49.091 [2024-07-23 00:18:03.532940] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:49.091 
[2024-07-23 00:18:03.532950] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:49.091 [2024-07-23 00:18:03.540360] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:49.091 [2024-07-23 00:18:03.540382] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:49.091 [2024-07-23 00:18:03.548320] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:49.091 [2024-07-23 00:18:03.548877] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:49.091 [2024-07-23 00:18:03.557385] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:49.091 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.091 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:49.091 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:49.091 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:49.091 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.091 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.091 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.091 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:49.092 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:49.092 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.092 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.092 [2024-07-23 00:18:03.643445] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:49.092 [2024-07-23 00:18:03.643952] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:49.092 [2024-07-23 00:18:03.643970] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:49.092 [2024-07-23 00:18:03.643982] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:49.092 [2024-07-23 00:18:03.652582] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:49.092 [2024-07-23 00:18:03.652609] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:49.092 [2024-07-23 00:18:03.659323] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:49.092 [2024-07-23 00:18:03.659884] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:49.092 [2024-07-23 00:18:03.668387] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:49.092 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.092 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:49.092 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:49.092 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:49.092 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.092 00:18:03 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.092 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.092 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:49.092 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:49.092 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.092 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.092 [2024-07-23 00:18:03.755444] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:49.092 [2024-07-23 00:18:03.755896] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:49.092 [2024-07-23 00:18:03.755917] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:49.092 [2024-07-23 00:18:03.755926] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:49.092 [2024-07-23 00:18:03.764590] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:49.092 [2024-07-23 00:18:03.764617] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:49.092 [2024-07-23 00:18:03.771318] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:49.092 [2024-07-23 00:18:03.771936] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:49.351 [2024-07-23 00:18:03.780364] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:49.351 { 00:13:49.351 "ublk_device": "/dev/ublkb0", 00:13:49.351 "id": 0, 00:13:49.351 "queue_depth": 512, 00:13:49.351 "num_queues": 4, 00:13:49.351 "bdev_name": "Malloc0" 00:13:49.351 }, 00:13:49.351 { 00:13:49.351 "ublk_device": "/dev/ublkb1", 00:13:49.351 "id": 1, 00:13:49.351 "queue_depth": 512, 00:13:49.351 "num_queues": 4, 00:13:49.351 "bdev_name": "Malloc1" 00:13:49.351 }, 00:13:49.351 { 00:13:49.351 "ublk_device": "/dev/ublkb2", 00:13:49.351 "id": 2, 00:13:49.351 "queue_depth": 512, 00:13:49.351 "num_queues": 4, 00:13:49.351 "bdev_name": "Malloc2" 00:13:49.351 }, 00:13:49.351 { 00:13:49.351 "ublk_device": "/dev/ublkb3", 00:13:49.351 "id": 3, 00:13:49.351 "queue_depth": 512, 00:13:49.351 "num_queues": 4, 00:13:49.351 "bdev_name": "Malloc3" 00:13:49.351 } 00:13:49.351 ]' 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:49.351 00:18:03 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:49.351 00:18:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:49.351 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:49.351 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:49.351 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:49.610 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:49.610 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:49.610 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:49.610 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:49.610 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:49.610 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:49.610 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:49.610 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:49.610 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:49.610 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:49.610 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:49.610 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:49.610 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:49.869 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:49.869 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:49.869 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:49.869 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:49.869 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:49.869 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:49.869 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:49.869 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:49.869 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:49.869 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:49.869 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:49.869 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:49.869 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- 
# [[ 512 = \5\1\2 ]] 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.128 [2024-07-23 00:18:04.651442] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:50.128 [2024-07-23 00:18:04.701868] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:50.128 [2024-07-23 00:18:04.708030] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:50.128 [2024-07-23 00:18:04.719610] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:50.128 [2024-07-23 00:18:04.719994] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:50.128 [2024-07-23 00:18:04.720013] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.128 [2024-07-23 00:18:04.738361] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:50.128 [2024-07-23 00:18:04.773433] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:50.128 [2024-07-23 00:18:04.778613] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:50.128 [2024-07-23 00:18:04.788355] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:50.128 [2024-07-23 00:18:04.788736] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:50.128 [2024-07-23 00:18:04.788754] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.128 00:18:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.128 [2024-07-23 00:18:04.800413] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:50.387 [2024-07-23 
00:18:04.839368] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:50.387 [2024-07-23 00:18:04.841229] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:50.387 [2024-07-23 00:18:04.847326] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:50.387 [2024-07-23 00:18:04.847693] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:50.387 [2024-07-23 00:18:04.847712] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:50.387 00:18:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.387 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:50.387 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:50.387 00:18:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.387 00:18:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.387 [2024-07-23 00:18:04.863428] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:50.387 [2024-07-23 00:18:04.895913] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:50.387 [2024-07-23 00:18:04.897173] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:50.387 [2024-07-23 00:18:04.903322] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:50.387 [2024-07-23 00:18:04.903706] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:50.388 [2024-07-23 00:18:04.903724] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:50.388 00:18:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.388 00:18:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:50.647 [2024-07-23 00:18:05.079423] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:13:50.647 [2024-07-23 00:18:05.082525] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:13:50.647 [2024-07-23 00:18:05.082566] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:50.647 00:18:05 
ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.647 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.907 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.907 00:18:05 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:50.907 00:18:05 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:13:50.907 00:18:05 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:50.907 00:18:05 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:50.907 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.907 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.907 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.907 00:18:05 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:50.907 00:18:05 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:13:50.907 00:18:05 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:50.907 00:13:50.907 real 0m2.109s 00:13:50.907 user 0m0.961s 00:13:50.907 sys 0m0.224s 00:13:50.907 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:50.907 00:18:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.907 ************************************ 00:13:50.907 END TEST test_create_multi_ublk 00:13:50.907 ************************************ 00:13:50.907 00:18:05 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:50.907 00:18:05 ublk -- ublk/ublk.sh@147 -- # cleanup 00:13:50.907 00:18:05 ublk -- ublk/ublk.sh@130 -- # killprocess 86212 00:13:50.907 00:18:05 ublk -- common/autotest_common.sh@946 -- # '[' -z 86212 ']' 00:13:50.907 00:18:05 ublk -- common/autotest_common.sh@950 -- # kill -0 86212 00:13:50.907 00:18:05 ublk -- common/autotest_common.sh@951 -- # uname 00:13:50.907 00:18:05 ublk -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:50.907 00:18:05 ublk -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86212 00:13:50.907 00:18:05 ublk -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:50.907 00:18:05 ublk -- common/autotest_common.sh@956 -- # 
'[' reactor_0 = sudo ']' 00:13:50.907 killing process with pid 86212 00:13:50.907 00:18:05 ublk -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86212' 00:13:50.907 00:18:05 ublk -- common/autotest_common.sh@965 -- # kill 86212 00:13:50.907 00:18:05 ublk -- common/autotest_common.sh@970 -- # wait 86212 00:13:51.166 [2024-07-23 00:18:05.657442] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:13:51.166 [2024-07-23 00:18:05.657555] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:13:51.425 00:13:51.425 real 0m18.035s 00:13:51.425 user 0m28.611s 00:13:51.425 sys 0m7.312s 00:13:51.425 00:18:05 ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:51.425 ************************************ 00:13:51.425 END TEST ublk 00:13:51.425 ************************************ 00:13:51.425 00:18:05 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.425 00:18:05 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:51.425 00:18:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:51.425 00:18:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:51.425 00:18:05 -- common/autotest_common.sh@10 -- # set +x 00:13:51.425 ************************************ 00:13:51.425 START TEST ublk_recovery 00:13:51.425 ************************************ 00:13:51.425 00:18:05 ublk_recovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:51.425 * Looking for test storage... 00:13:51.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:51.425 00:18:06 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:51.425 00:18:06 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:51.425 00:18:06 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:51.425 00:18:06 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:51.425 00:18:06 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:51.425 00:18:06 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:51.425 00:18:06 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:51.425 00:18:06 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:51.425 00:18:06 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:51.425 00:18:06 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:51.684 00:18:06 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=86556 00:13:51.684 00:18:06 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:51.684 00:18:06 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:51.684 00:18:06 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 86556 00:13:51.684 00:18:06 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 86556 ']' 00:13:51.684 00:18:06 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.684 00:18:06 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:51.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.684 00:18:06 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
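The multi-ublk teardown exercised just above follows a fixed pattern; a condensed sketch of the same flow is below (rpc.py path shortened, MAX_DEV_ID=3 as in this run, and the harness's `[[ 0 == 0 ]]` status checks written as direct tests):

  # stop every exported ublk device, then remove the target
  for i in $(seq 0 3); do
      scripts/rpc.py ublk_stop_disk "$i"
  done
  scripts/rpc.py -t 120 ublk_destroy_target
  # delete the backing malloc bdevs and assert nothing leaked
  for i in $(seq 0 3); do
      scripts/rpc.py bdev_malloc_delete "Malloc$i"
  done
  [ "$(scripts/rpc.py bdev_get_bdevs | jq length)" -eq 0 ]
  [ "$(scripts/rpc.py bdev_lvol_get_lvstores | jq length)" -eq 0 ]

Each ublk_stop_disk maps to the UBLK_CMD_STOP_DEV / UBLK_CMD_DEL_DEV control-command pairs visible in the debug trace above.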
00:13:51.684 00:18:06 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:51.684 00:18:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:51.684 [2024-07-23 00:18:06.208927] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:51.684 [2024-07-23 00:18:06.209093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86556 ] 00:13:51.684 [2024-07-23 00:18:06.360762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:51.943 [2024-07-23 00:18:06.402628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.943 [2024-07-23 00:18:06.402757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.509 00:18:06 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:52.509 00:18:06 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:13:52.509 00:18:06 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:52.509 00:18:06 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.509 00:18:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.509 [2024-07-23 00:18:06.994286] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:52.509 [2024-07-23 00:18:06.995614] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:52.509 00:18:06 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.509 00:18:06 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:52.509 00:18:06 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.509 00:18:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.509 malloc0 00:13:52.509 00:18:07 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.509 00:18:07 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:52.509 00:18:07 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.509 00:18:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.509 [2024-07-23 00:18:07.042425] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:52.509 [2024-07-23 00:18:07.042584] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:52.509 [2024-07-23 00:18:07.042599] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:52.509 [2024-07-23 00:18:07.042608] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:52.509 [2024-07-23 00:18:07.051428] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:52.509 [2024-07-23 00:18:07.051449] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:52.509 [2024-07-23 00:18:07.058361] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:52.509 [2024-07-23 00:18:07.058503] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:52.509 [2024-07-23 00:18:07.081316] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:52.509 1 00:13:52.509 00:18:07 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.509 
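With ublk1 now started (ublk_create_target, bdev_malloc_create -b malloc0 64 4096, ublk_start_disk malloc0 1 -q 2 -d 128, as traced above), the recovery test that follows exercises the crash-and-recover path: run fio against the exported block device, kill the target mid-I/O, start a fresh target, and re-attach the device with ublk_recover_disk. A condensed sketch, with paths shortened and waitforlisten assumed sourced from autotest_common.sh:

  # drive I/O against the exported device in the background
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &
  fio_pid=$!
  sleep 5
  kill -9 "$spdk_pid"                        # simulate a target crash mid-I/O
  sleep 5
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &  # restart the target
  spdk_pid=$!
  waitforlisten "$spdk_pid"
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_recover_disk malloc0 1 # re-attach /dev/ublkb1 to malloc0
  wait "$fio_pid"                            # fio should finish its 60s run cleanly

The recovery RPC drives the UBLK_CMD_GET_DEV_INFO, UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY control commands seen in the trace below, after which fio resumes against the re-attached device.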
00:18:07 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:53.456 00:18:08 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=86584 00:13:53.456 00:18:08 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:53.457 00:18:08 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:53.715 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:53.715 fio-3.35 00:13:53.715 Starting 1 process 00:13:58.985 00:18:13 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 86556 00:13:58.985 00:18:13 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:04.281 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 86556 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:04.281 00:18:18 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=86696 00:14:04.281 00:18:18 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:04.281 00:18:18 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:04.281 00:18:18 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 86696 00:14:04.281 00:18:18 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 86696 ']' 00:14:04.281 00:18:18 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.281 00:18:18 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:04.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.281 00:18:18 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.281 00:18:18 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:04.281 00:18:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.281 [2024-07-23 00:18:18.202541] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:14:04.281 [2024-07-23 00:18:18.202675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86696 ] 00:14:04.281 [2024-07-23 00:18:18.363968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:04.281 [2024-07-23 00:18:18.407283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.281 [2024-07-23 00:18:18.407411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.543 00:18:19 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:04.543 00:18:19 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:14:04.543 00:18:19 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:04.543 00:18:19 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.543 00:18:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.543 [2024-07-23 00:18:19.012287] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:04.543 [2024-07-23 00:18:19.013646] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:04.543 00:18:19 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.543 00:18:19 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:04.543 00:18:19 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.543 00:18:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.543 malloc0 00:14:04.543 00:18:19 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.543 00:18:19 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:04.543 00:18:19 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.543 00:18:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.543 [2024-07-23 00:18:19.052581] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:04.543 [2024-07-23 00:18:19.052630] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:04.543 [2024-07-23 00:18:19.052652] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:04.543 [2024-07-23 00:18:19.060314] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:04.543 [2024-07-23 00:18:19.060341] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:04.543 [2024-07-23 00:18:19.060445] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:04.543 1 00:14:04.543 00:18:19 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.543 00:18:19 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 86584 00:14:04.543 [2024-07-23 00:18:19.068299] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:04.543 [2024-07-23 00:18:19.072059] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:04.543 [2024-07-23 00:18:19.076489] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:04.543 [2024-07-23 00:18:19.076508] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:00.779 00:15:00.779 fio_test: (groupid=0, 
jobs=1): err= 0: pid=86593: Tue Jul 23 00:19:08 2024 00:15:00.779 read: IOPS=22.6k, BW=88.1MiB/s (92.4MB/s)(5289MiB/60004msec) 00:15:00.779 slat (nsec): min=1971, max=613995, avg=7284.22, stdev=2320.66 00:15:00.779 clat (usec): min=1343, max=5988.3k, avg=2795.64, stdev=41464.52 00:15:00.779 lat (usec): min=1353, max=5988.3k, avg=2802.92, stdev=41464.52 00:15:00.779 clat percentiles (usec): 00:15:00.779 | 1.00th=[ 1876], 5.00th=[ 2057], 10.00th=[ 2114], 20.00th=[ 2180], 00:15:00.779 | 30.00th=[ 2212], 40.00th=[ 2245], 50.00th=[ 2278], 60.00th=[ 2311], 00:15:00.779 | 70.00th=[ 2540], 80.00th=[ 2769], 90.00th=[ 2999], 95.00th=[ 3752], 00:15:00.779 | 99.00th=[ 5145], 99.50th=[ 5604], 99.90th=[ 7242], 99.95th=[ 8291], 00:15:00.779 | 99.99th=[12780] 00:15:00.779 bw ( KiB/s): min=26368, max=109680, per=100.00%, avg=99450.45, stdev=13891.06, samples=108 00:15:00.779 iops : min= 6592, max=27420, avg=24862.59, stdev=3472.76, samples=108 00:15:00.779 write: IOPS=22.5k, BW=88.0MiB/s (92.3MB/s)(5282MiB/60004msec); 0 zone resets 00:15:00.780 slat (nsec): min=1969, max=4004.1k, avg=7319.97, stdev=4137.06 00:15:00.780 clat (usec): min=1343, max=5988.6k, avg=2864.95, stdev=40851.89 00:15:00.780 lat (usec): min=1351, max=5988.6k, avg=2872.27, stdev=40851.89 00:15:00.780 clat percentiles (usec): 00:15:00.780 | 1.00th=[ 1876], 5.00th=[ 2057], 10.00th=[ 2180], 20.00th=[ 2278], 00:15:00.780 | 30.00th=[ 2311], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2409], 00:15:00.780 | 70.00th=[ 2573], 80.00th=[ 2900], 90.00th=[ 3097], 95.00th=[ 3752], 00:15:00.780 | 99.00th=[ 5145], 99.50th=[ 5669], 99.90th=[ 7308], 99.95th=[ 8455], 00:15:00.780 | 99.99th=[12911] 00:15:00.780 bw ( KiB/s): min=26664, max=109712, per=100.00%, avg=99311.47, stdev=13914.77, samples=108 00:15:00.780 iops : min= 6666, max=27428, avg=24827.85, stdev=3478.68, samples=108 00:15:00.780 lat (msec) : 2=3.23%, 4=92.83%, 10=3.92%, 20=0.02%, >=2000=0.01% 00:15:00.780 cpu : usr=12.24%, sys=32.40%, ctx=117702, majf=0, minf=13 00:15:00.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:00.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:00.780 issued rwts: total=1353970,1352121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:00.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:00.780 00:15:00.780 Run status group 0 (all jobs): 00:15:00.780 READ: bw=88.1MiB/s (92.4MB/s), 88.1MiB/s-88.1MiB/s (92.4MB/s-92.4MB/s), io=5289MiB (5546MB), run=60004-60004msec 00:15:00.780 WRITE: bw=88.0MiB/s (92.3MB/s), 88.0MiB/s-88.0MiB/s (92.3MB/s-92.3MB/s), io=5282MiB (5538MB), run=60004-60004msec 00:15:00.780 00:15:00.780 Disk stats (read/write): 00:15:00.780 ublkb1: ios=1351042/1349154, merge=0/0, ticks=3662909/3618659, in_queue=7281569, util=99.94% 00:15:00.780 00:19:08 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:00.780 [2024-07-23 00:19:08.366110] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:00.780 [2024-07-23 00:19:08.408326] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:00.780 [2024-07-23 00:19:08.408587] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:00.780 [2024-07-23 00:19:08.416309] 
ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:00.780 [2024-07-23 00:19:08.416425] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:00.780 [2024-07-23 00:19:08.416436] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.780 00:19:08 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:00.780 [2024-07-23 00:19:08.432398] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:00.780 [2024-07-23 00:19:08.434353] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:00.780 [2024-07-23 00:19:08.434397] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.780 00:19:08 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:00.780 00:19:08 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:00.780 00:19:08 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 86696 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@946 -- # '[' -z 86696 ']' 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@950 -- # kill -0 86696 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@951 -- # uname 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86696 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:00.780 killing process with pid 86696 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86696' 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@965 -- # kill 86696 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@970 -- # wait 86696 00:15:00.780 [2024-07-23 00:19:08.623169] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:00.780 [2024-07-23 00:19:08.623248] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:00.780 00:15:00.780 real 1m2.910s 00:15:00.780 user 1m43.587s 00:15:00.780 sys 0m38.497s 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:00.780 ************************************ 00:15:00.780 END TEST ublk_recovery 00:15:00.780 ************************************ 00:15:00.780 00:19:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:00.780 00:19:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:15:00.780 00:19:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:15:00.780 00:19:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:00.780 00:19:08 -- common/autotest_common.sh@10 -- # set +x 00:15:00.780 00:19:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:15:00.780 00:19:08 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:15:00.780 00:19:08 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:15:00.780 00:19:08 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:15:00.780 00:19:08 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:15:00.780 00:19:08 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:15:00.780 00:19:08 -- 
spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:15:00.780 00:19:08 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:15:00.780 00:19:08 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:15:00.780 00:19:08 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:15:00.780 00:19:08 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:00.780 00:19:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:00.780 00:19:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:00.780 00:19:08 -- common/autotest_common.sh@10 -- # set +x 00:15:00.780 ************************************ 00:15:00.780 START TEST ftl 00:15:00.780 ************************************ 00:15:00.780 00:19:09 ftl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:00.780 * Looking for test storage... 00:15:00.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:00.780 00:19:09 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:00.780 00:19:09 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:00.780 00:19:09 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:00.780 00:19:09 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:00.780 00:19:09 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:00.780 00:19:09 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:00.780 00:19:09 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.780 00:19:09 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:00.780 00:19:09 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:00.780 00:19:09 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:00.780 00:19:09 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:00.780 00:19:09 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:00.780 00:19:09 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:00.780 00:19:09 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:00.780 00:19:09 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:00.780 00:19:09 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:00.780 00:19:09 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:00.780 00:19:09 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:00.780 00:19:09 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:00.780 00:19:09 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:00.780 00:19:09 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:00.780 00:19:09 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:00.780 00:19:09 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:00.780 00:19:09 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:00.780 00:19:09 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:00.780 00:19:09 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:00.780 00:19:09 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:00.780 00:19:09 ftl -- ftl/common.sh@25 -- # export 
spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:00.780 00:19:09 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:00.780 00:19:09 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.780 00:19:09 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:00.780 00:19:09 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:00.780 00:19:09 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:00.780 00:19:09 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:00.780 00:19:09 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:00.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:00.780 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:00.780 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:00.780 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:00.780 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:00.780 00:19:09 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:00.780 00:19:09 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=87480 00:15:00.780 00:19:09 ftl -- ftl/ftl.sh@38 -- # waitforlisten 87480 00:15:00.780 00:19:09 ftl -- common/autotest_common.sh@827 -- # '[' -z 87480 ']' 00:15:00.780 00:19:09 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.780 00:19:09 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:00.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.781 00:19:09 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.781 00:19:09 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:00.781 00:19:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:00.781 [2024-07-23 00:19:10.051023] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
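The ftl.sh prologue just traced starts the target with --wait-for-rpc so that bdev options can be set before subsystem initialization; the RPC sequence in the trace that follows then completes startup and picks the NV-cache controller by metadata size. A condensed sketch (paths shortened; the jq filter is quoted from the trace, and /dev/fd/62 in the log is the process substitution of gen_nvme.sh shown here; my reading of -d as disabling bdev auto-examine is an assumption):

  "$SPDK_BIN_DIR/spdk_tgt" --wait-for-rpc &   # hold off subsystem init
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"
  scripts/rpc.py bdev_set_options -d          # -d: disable bdev auto-examine (assumed)
  scripts/rpc.py framework_start_init
  scripts/rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)
  # choose a cache-capable controller: 64B metadata, non-zoned, >= 1310720 blocks
  scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and
      .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'

In this run the filter selects 0000:00:10.0 as the NV cache and leaves 0000:00:11.0 as the base device.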
00:15:00.781 [2024-07-23 00:19:10.051159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87480 ] 00:15:00.781 [2024-07-23 00:19:10.203178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.781 [2024-07-23 00:19:10.246839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.781 00:19:10 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:00.781 00:19:10 ftl -- common/autotest_common.sh@860 -- # return 0 00:15:00.781 00:19:10 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:00.781 00:19:11 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:00.781 00:19:11 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:00.781 00:19:11 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:00.781 00:19:11 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:00.781 00:19:11 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:00.781 00:19:11 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:00.781 00:19:11 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:00.781 00:19:11 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:00.781 00:19:11 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:00.781 00:19:11 ftl -- ftl/ftl.sh@50 -- # break 00:15:00.781 00:19:11 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:00.781 00:19:11 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:00.781 00:19:11 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:00.781 00:19:11 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:00.781 00:19:12 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:00.781 00:19:12 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:00.781 00:19:12 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:00.781 00:19:12 ftl -- ftl/ftl.sh@63 -- # break 00:15:00.781 00:19:12 ftl -- ftl/ftl.sh@66 -- # killprocess 87480 00:15:00.781 00:19:12 ftl -- common/autotest_common.sh@946 -- # '[' -z 87480 ']' 00:15:00.781 00:19:12 ftl -- common/autotest_common.sh@950 -- # kill -0 87480 00:15:00.781 00:19:12 ftl -- common/autotest_common.sh@951 -- # uname 00:15:00.781 00:19:12 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:00.781 00:19:12 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87480 00:15:00.781 00:19:12 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:00.781 00:19:12 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:00.781 killing process with pid 87480 00:15:00.781 00:19:12 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87480' 00:15:00.781 00:19:12 ftl -- common/autotest_common.sh@965 -- # kill 87480 00:15:00.781 00:19:12 ftl -- common/autotest_common.sh@970 -- # wait 87480 00:15:00.781 00:19:12 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:00.781 00:19:12 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:00.781 00:19:12 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:00.781 00:19:12 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:00.781 00:19:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:00.781 ************************************ 00:15:00.781 START TEST ftl_fio_basic 00:15:00.781 ************************************ 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:00.781 * Looking for test storage... 00:15:00.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=87583 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 87583 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- common/autotest_common.sh@827 -- # '[' -z 87583 ']' 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:00.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:00.781 00:19:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:00.781 [2024-07-23 00:19:12.860841] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
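The fio-basic test builds its FTL device on a small bdev stack, assembled by the RPCs traced below. In outline (names, sizes and PCI addresses copied from the trace; the lvstore UUID is generated at run time, and the MiB unit annotations are my reading of the size arguments):

  # base: NVMe at 0000:00:11.0 -> lvstore -> 103424 MiB thin-provisioned lvol
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  lvs_uuid=$(scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs)
  scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid"
  # cache: NVMe at 0000:00:10.0, split into one 5171 MiB partition (nvc0n1p0)
  scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1

The repeated bdev_get_bdevs / jq calls in the trace are the get_bdev_size helper confirming each step: 1310720 blocks x 4096 B = 5120 MiB for the raw NVMe namespace, 26476544 blocks x 4096 B = 103424 MiB for the lvol.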
00:15:00.781 [2024-07-23 00:19:12.860967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87583 ] 00:15:00.781 [2024-07-23 00:19:13.003463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:00.781 [2024-07-23 00:19:13.048432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.781 [2024-07-23 00:19:13.048619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.781 [2024-07-23 00:19:13.048515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.781 00:19:13 ftl.ftl_fio_basic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:00.781 00:19:13 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # return 0 00:15:00.781 00:19:13 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:00.781 00:19:13 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:00.781 00:19:13 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:00.782 00:19:13 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:00.782 00:19:13 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:00.782 00:19:13 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:00.782 00:19:13 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:00.782 00:19:13 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:00.782 00:19:13 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:00.782 00:19:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:15:00.782 00:19:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:00.782 00:19:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:00.782 00:19:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:00.782 00:19:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:00.782 { 00:15:00.782 "name": "nvme0n1", 00:15:00.782 "aliases": [ 00:15:00.782 "e7b1e6ea-8bed-4665-94ea-94b871fe41cc" 00:15:00.782 ], 00:15:00.782 "product_name": "NVMe disk", 00:15:00.782 "block_size": 4096, 00:15:00.782 "num_blocks": 1310720, 00:15:00.782 "uuid": "e7b1e6ea-8bed-4665-94ea-94b871fe41cc", 00:15:00.782 "assigned_rate_limits": { 00:15:00.782 "rw_ios_per_sec": 0, 00:15:00.782 "rw_mbytes_per_sec": 0, 00:15:00.782 "r_mbytes_per_sec": 0, 00:15:00.782 "w_mbytes_per_sec": 0 00:15:00.782 }, 00:15:00.782 "claimed": false, 00:15:00.782 "zoned": false, 00:15:00.782 "supported_io_types": { 00:15:00.782 "read": true, 00:15:00.782 "write": true, 00:15:00.782 "unmap": true, 00:15:00.782 "write_zeroes": true, 00:15:00.782 "flush": true, 00:15:00.782 "reset": true, 00:15:00.782 "compare": true, 00:15:00.782 "compare_and_write": false, 00:15:00.782 "abort": true, 00:15:00.782 "nvme_admin": true, 00:15:00.782 "nvme_io": true 00:15:00.782 }, 00:15:00.782 "driver_specific": { 00:15:00.782 "nvme": [ 00:15:00.782 { 00:15:00.782 "pci_address": "0000:00:11.0", 00:15:00.782 "trid": { 00:15:00.782 "trtype": "PCIe", 00:15:00.782 "traddr": "0000:00:11.0" 00:15:00.782 }, 
00:15:00.782 "ctrlr_data": { 00:15:00.782 "cntlid": 0, 00:15:00.782 "vendor_id": "0x1b36", 00:15:00.782 "model_number": "QEMU NVMe Ctrl", 00:15:00.782 "serial_number": "12341", 00:15:00.782 "firmware_revision": "8.0.0", 00:15:00.782 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:00.782 "oacs": { 00:15:00.782 "security": 0, 00:15:00.782 "format": 1, 00:15:00.782 "firmware": 0, 00:15:00.782 "ns_manage": 1 00:15:00.782 }, 00:15:00.782 "multi_ctrlr": false, 00:15:00.782 "ana_reporting": false 00:15:00.782 }, 00:15:00.782 "vs": { 00:15:00.782 "nvme_version": "1.4" 00:15:00.782 }, 00:15:00.782 "ns_data": { 00:15:00.782 "id": 1, 00:15:00.782 "can_share": false 00:15:00.782 } 00:15:00.782 } 00:15:00.782 ], 00:15:00.782 "mp_policy": "active_passive" 00:15:00.782 } 00:15:00.782 } 00:15:00.782 ]' 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=1310720 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 5120 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=ea553ff1-d1c8-447b-ae65-acf19ec88f27 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ea553ff1-d1c8-447b-ae65-acf19ec88f27 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=06af267e-c09b-4cae-9a69-c432eecb8225 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 06af267e-c09b-4cae-9a69-c432eecb8225 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=06af267e-c09b-4cae-9a69-c432eecb8225 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 06af267e-c09b-4cae-9a69-c432eecb8225 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=06af267e-c09b-4cae-9a69-c432eecb8225 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06af267e-c09b-4cae-9a69-c432eecb8225 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:00.782 { 00:15:00.782 "name": "06af267e-c09b-4cae-9a69-c432eecb8225", 00:15:00.782 "aliases": [ 00:15:00.782 "lvs/nvme0n1p0" 00:15:00.782 ], 00:15:00.782 "product_name": "Logical Volume", 00:15:00.782 "block_size": 4096, 00:15:00.782 "num_blocks": 26476544, 00:15:00.782 "uuid": "06af267e-c09b-4cae-9a69-c432eecb8225", 00:15:00.782 "assigned_rate_limits": { 00:15:00.782 "rw_ios_per_sec": 0, 00:15:00.782 "rw_mbytes_per_sec": 0, 00:15:00.782 "r_mbytes_per_sec": 0, 00:15:00.782 "w_mbytes_per_sec": 0 00:15:00.782 }, 00:15:00.782 "claimed": false, 00:15:00.782 "zoned": false, 00:15:00.782 "supported_io_types": { 00:15:00.782 "read": true, 00:15:00.782 "write": true, 00:15:00.782 "unmap": true, 00:15:00.782 "write_zeroes": true, 00:15:00.782 "flush": false, 00:15:00.782 "reset": true, 00:15:00.782 "compare": false, 00:15:00.782 "compare_and_write": false, 00:15:00.782 "abort": false, 00:15:00.782 "nvme_admin": false, 00:15:00.782 "nvme_io": false 00:15:00.782 }, 00:15:00.782 "driver_specific": { 00:15:00.782 "lvol": { 00:15:00.782 "lvol_store_uuid": "ea553ff1-d1c8-447b-ae65-acf19ec88f27", 00:15:00.782 "base_bdev": "nvme0n1", 00:15:00.782 "thin_provision": true, 00:15:00.782 "num_allocated_clusters": 0, 00:15:00.782 "snapshot": false, 00:15:00.782 "clone": false, 00:15:00.782 "esnap_clone": false 00:15:00.782 } 00:15:00.782 } 00:15:00.782 } 00:15:00.782 ]' 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:00.782 00:19:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:00.782 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:00.782 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:00.782 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:15:00.782 00:19:15 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:00.782 00:19:15 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:00.782 00:19:15 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:00.782 00:19:15 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:00.782 00:19:15 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:00.782 00:19:15 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 06af267e-c09b-4cae-9a69-c432eecb8225 00:15:00.782 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=06af267e-c09b-4cae-9a69-c432eecb8225 00:15:00.782 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:00.782 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:00.782 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:00.782 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06af267e-c09b-4cae-9a69-c432eecb8225 00:15:01.041 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:01.041 { 00:15:01.041 "name": "06af267e-c09b-4cae-9a69-c432eecb8225", 00:15:01.041 "aliases": [ 00:15:01.041 "lvs/nvme0n1p0" 
00:15:01.041 ], 00:15:01.041 "product_name": "Logical Volume", 00:15:01.041 "block_size": 4096, 00:15:01.041 "num_blocks": 26476544, 00:15:01.041 "uuid": "06af267e-c09b-4cae-9a69-c432eecb8225", 00:15:01.041 "assigned_rate_limits": { 00:15:01.041 "rw_ios_per_sec": 0, 00:15:01.041 "rw_mbytes_per_sec": 0, 00:15:01.041 "r_mbytes_per_sec": 0, 00:15:01.041 "w_mbytes_per_sec": 0 00:15:01.041 }, 00:15:01.041 "claimed": false, 00:15:01.041 "zoned": false, 00:15:01.041 "supported_io_types": { 00:15:01.041 "read": true, 00:15:01.041 "write": true, 00:15:01.041 "unmap": true, 00:15:01.041 "write_zeroes": true, 00:15:01.041 "flush": false, 00:15:01.041 "reset": true, 00:15:01.041 "compare": false, 00:15:01.041 "compare_and_write": false, 00:15:01.041 "abort": false, 00:15:01.041 "nvme_admin": false, 00:15:01.041 "nvme_io": false 00:15:01.041 }, 00:15:01.041 "driver_specific": { 00:15:01.041 "lvol": { 00:15:01.041 "lvol_store_uuid": "ea553ff1-d1c8-447b-ae65-acf19ec88f27", 00:15:01.041 "base_bdev": "nvme0n1", 00:15:01.041 "thin_provision": true, 00:15:01.041 "num_allocated_clusters": 0, 00:15:01.041 "snapshot": false, 00:15:01.041 "clone": false, 00:15:01.041 "esnap_clone": false 00:15:01.041 } 00:15:01.041 } 00:15:01.041 } 00:15:01.041 ]' 00:15:01.041 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:01.041 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:01.041 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:01.041 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:01.041 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:01.041 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:15:01.041 00:19:15 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:01.041 00:19:15 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:01.300 00:19:15 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:01.300 00:19:15 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:01.300 00:19:15 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:01.300 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:01.300 00:19:15 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 06af267e-c09b-4cae-9a69-c432eecb8225 00:15:01.300 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=06af267e-c09b-4cae-9a69-c432eecb8225 00:15:01.300 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:01.300 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:01.300 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:01.300 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06af267e-c09b-4cae-9a69-c432eecb8225 00:15:01.300 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:01.300 { 00:15:01.300 "name": "06af267e-c09b-4cae-9a69-c432eecb8225", 00:15:01.300 "aliases": [ 00:15:01.300 "lvs/nvme0n1p0" 00:15:01.300 ], 00:15:01.300 "product_name": "Logical Volume", 00:15:01.300 "block_size": 4096, 00:15:01.300 "num_blocks": 26476544, 00:15:01.300 "uuid": "06af267e-c09b-4cae-9a69-c432eecb8225", 00:15:01.300 "assigned_rate_limits": { 00:15:01.300 "rw_ios_per_sec": 0, 
00:15:01.300 "rw_mbytes_per_sec": 0, 00:15:01.300 "r_mbytes_per_sec": 0, 00:15:01.300 "w_mbytes_per_sec": 0 00:15:01.300 }, 00:15:01.300 "claimed": false, 00:15:01.300 "zoned": false, 00:15:01.300 "supported_io_types": { 00:15:01.300 "read": true, 00:15:01.300 "write": true, 00:15:01.300 "unmap": true, 00:15:01.300 "write_zeroes": true, 00:15:01.300 "flush": false, 00:15:01.300 "reset": true, 00:15:01.300 "compare": false, 00:15:01.300 "compare_and_write": false, 00:15:01.300 "abort": false, 00:15:01.300 "nvme_admin": false, 00:15:01.300 "nvme_io": false 00:15:01.300 }, 00:15:01.300 "driver_specific": { 00:15:01.300 "lvol": { 00:15:01.300 "lvol_store_uuid": "ea553ff1-d1c8-447b-ae65-acf19ec88f27", 00:15:01.300 "base_bdev": "nvme0n1", 00:15:01.300 "thin_provision": true, 00:15:01.300 "num_allocated_clusters": 0, 00:15:01.300 "snapshot": false, 00:15:01.300 "clone": false, 00:15:01.300 "esnap_clone": false 00:15:01.300 } 00:15:01.300 } 00:15:01.300 } 00:15:01.300 ]' 00:15:01.300 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:01.559 00:19:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:01.559 00:19:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:01.559 00:19:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:01.559 00:19:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:01.560 00:19:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:15:01.560 00:19:16 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:01.560 00:19:16 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:01.560 00:19:16 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 06af267e-c09b-4cae-9a69-c432eecb8225 -c nvc0n1p0 --l2p_dram_limit 60 00:15:01.560 [2024-07-23 00:19:16.209442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.560 [2024-07-23 00:19:16.209499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:01.560 [2024-07-23 00:19:16.209520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:01.560 [2024-07-23 00:19:16.209531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.560 [2024-07-23 00:19:16.209615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.560 [2024-07-23 00:19:16.209630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:01.560 [2024-07-23 00:19:16.209645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:15:01.560 [2024-07-23 00:19:16.209655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.560 [2024-07-23 00:19:16.209716] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:01.560 [2024-07-23 00:19:16.210031] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:01.560 [2024-07-23 00:19:16.210055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.560 [2024-07-23 00:19:16.210066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:01.560 [2024-07-23 00:19:16.210079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:15:01.560 [2024-07-23 00:19:16.210101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:15:01.560 [2024-07-23 00:19:16.210218] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b9316d50-f8ef-44c6-8603-0870039239e4 00:15:01.560 [2024-07-23 00:19:16.211744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.560 [2024-07-23 00:19:16.211778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:01.560 [2024-07-23 00:19:16.211791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:15:01.560 [2024-07-23 00:19:16.211806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.560 [2024-07-23 00:19:16.219378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.560 [2024-07-23 00:19:16.219415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:01.560 [2024-07-23 00:19:16.219450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.498 ms 00:15:01.560 [2024-07-23 00:19:16.219475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.560 [2024-07-23 00:19:16.219601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.560 [2024-07-23 00:19:16.219619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:01.560 [2024-07-23 00:19:16.219630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:15:01.560 [2024-07-23 00:19:16.219645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.560 [2024-07-23 00:19:16.219728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.560 [2024-07-23 00:19:16.219742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:01.560 [2024-07-23 00:19:16.219753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:15:01.560 [2024-07-23 00:19:16.219765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.560 [2024-07-23 00:19:16.219797] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:01.560 [2024-07-23 00:19:16.221616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.560 [2024-07-23 00:19:16.221646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:01.560 [2024-07-23 00:19:16.221660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.825 ms 00:15:01.560 [2024-07-23 00:19:16.221671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.560 [2024-07-23 00:19:16.221727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.560 [2024-07-23 00:19:16.221742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:01.560 [2024-07-23 00:19:16.221770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:01.560 [2024-07-23 00:19:16.221779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.560 [2024-07-23 00:19:16.221813] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:01.560 [2024-07-23 00:19:16.221967] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:01.560 [2024-07-23 00:19:16.221989] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:01.560 [2024-07-23 00:19:16.222003] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:15:01.560 [2024-07-23 00:19:16.222019] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:01.560 [2024-07-23 00:19:16.222031] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:01.560 [2024-07-23 00:19:16.222047] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:01.560 [2024-07-23 00:19:16.222057] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:01.560 [2024-07-23 00:19:16.222069] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:01.560 [2024-07-23 00:19:16.222079] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:01.560 [2024-07-23 00:19:16.222092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.560 [2024-07-23 00:19:16.222101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:01.560 [2024-07-23 00:19:16.222114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:15:01.560 [2024-07-23 00:19:16.222125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.560 [2024-07-23 00:19:16.222215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.560 [2024-07-23 00:19:16.222226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:01.560 [2024-07-23 00:19:16.222244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:15:01.560 [2024-07-23 00:19:16.222254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.560 [2024-07-23 00:19:16.222387] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:01.560 [2024-07-23 00:19:16.222401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:01.560 [2024-07-23 00:19:16.222416] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:01.560 [2024-07-23 00:19:16.222426] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.560 [2024-07-23 00:19:16.222451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:01.560 [2024-07-23 00:19:16.222460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:01.560 [2024-07-23 00:19:16.222473] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:01.560 [2024-07-23 00:19:16.222482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:01.560 [2024-07-23 00:19:16.222494] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:01.560 [2024-07-23 00:19:16.222503] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:01.560 [2024-07-23 00:19:16.222515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:01.560 [2024-07-23 00:19:16.222524] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:01.560 [2024-07-23 00:19:16.222536] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:01.560 [2024-07-23 00:19:16.222549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:01.560 [2024-07-23 00:19:16.222563] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:01.560 [2024-07-23 00:19:16.222572] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.560 
[2024-07-23 00:19:16.222583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:01.560 [2024-07-23 00:19:16.222593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:01.560 [2024-07-23 00:19:16.222604] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.560 [2024-07-23 00:19:16.222613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:01.560 [2024-07-23 00:19:16.222626] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:01.560 [2024-07-23 00:19:16.222635] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:01.560 [2024-07-23 00:19:16.222647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:01.560 [2024-07-23 00:19:16.222664] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:01.560 [2024-07-23 00:19:16.222683] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:01.560 [2024-07-23 00:19:16.222699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:01.560 [2024-07-23 00:19:16.222717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:01.560 [2024-07-23 00:19:16.222726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:01.560 [2024-07-23 00:19:16.222739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:01.560 [2024-07-23 00:19:16.222749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:01.560 [2024-07-23 00:19:16.222763] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:01.560 [2024-07-23 00:19:16.222773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:01.560 [2024-07-23 00:19:16.222784] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:01.560 [2024-07-23 00:19:16.222804] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:01.560 [2024-07-23 00:19:16.222815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:01.560 [2024-07-23 00:19:16.222824] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:01.560 [2024-07-23 00:19:16.222835] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:01.560 [2024-07-23 00:19:16.222844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:01.560 [2024-07-23 00:19:16.222855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:01.560 [2024-07-23 00:19:16.222864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.561 [2024-07-23 00:19:16.222875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:01.561 [2024-07-23 00:19:16.222884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:01.561 [2024-07-23 00:19:16.222895] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.561 [2024-07-23 00:19:16.222903] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:01.561 [2024-07-23 00:19:16.222915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:01.561 [2024-07-23 00:19:16.222926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:01.561 [2024-07-23 00:19:16.222941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.561 [2024-07-23 00:19:16.222954] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:15:01.561 [2024-07-23 00:19:16.222965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:01.561 [2024-07-23 00:19:16.222975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:01.561 [2024-07-23 00:19:16.222987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:01.561 [2024-07-23 00:19:16.222995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:01.561 [2024-07-23 00:19:16.223009] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:01.561 [2024-07-23 00:19:16.223022] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:01.561 [2024-07-23 00:19:16.223050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:01.561 [2024-07-23 00:19:16.223061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:01.561 [2024-07-23 00:19:16.223073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:01.561 [2024-07-23 00:19:16.223084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:01.561 [2024-07-23 00:19:16.223097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:01.561 [2024-07-23 00:19:16.223107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:01.561 [2024-07-23 00:19:16.223119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:01.561 [2024-07-23 00:19:16.223129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:01.561 [2024-07-23 00:19:16.223144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:01.561 [2024-07-23 00:19:16.223153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:01.561 [2024-07-23 00:19:16.223166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:01.561 [2024-07-23 00:19:16.223176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:01.561 [2024-07-23 00:19:16.223188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:01.561 [2024-07-23 00:19:16.223198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:01.561 [2024-07-23 00:19:16.223210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:01.561 [2024-07-23 00:19:16.223220] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:01.561 [2024-07-23 
00:19:16.223233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:01.561 [2024-07-23 00:19:16.223244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:01.561 [2024-07-23 00:19:16.223256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:01.561 [2024-07-23 00:19:16.223612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:01.561 [2024-07-23 00:19:16.223788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:01.561 [2024-07-23 00:19:16.223969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.561 [2024-07-23 00:19:16.224089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:01.561 [2024-07-23 00:19:16.224200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.638 ms 00:15:01.561 [2024-07-23 00:19:16.224246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.561 [2024-07-23 00:19:16.224402] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:15:01.561 [2024-07-23 00:19:16.224543] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:04.093 [2024-07-23 00:19:18.498220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.498509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:04.093 [2024-07-23 00:19:18.498614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2277.509 ms 00:15:04.093 [2024-07-23 00:19:18.498658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.510840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.511131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:04.093 [2024-07-23 00:19:18.511289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.044 ms 00:15:04.093 [2024-07-23 00:19:18.511369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.511586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.511681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:04.093 [2024-07-23 00:19:18.511811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:15:04.093 [2024-07-23 00:19:18.511842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.534898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.534960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:04.093 [2024-07-23 00:19:18.535007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.958 ms 00:15:04.093 [2024-07-23 00:19:18.535028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.535111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 
00:19:18.535134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:04.093 [2024-07-23 00:19:18.535152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:04.093 [2024-07-23 00:19:18.535170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.535826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.535863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:04.093 [2024-07-23 00:19:18.535882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:15:04.093 [2024-07-23 00:19:18.535901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.536148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.536201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:04.093 [2024-07-23 00:19:18.536223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:15:04.093 [2024-07-23 00:19:18.536246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.546023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.546080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:04.093 [2024-07-23 00:19:18.546101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.690 ms 00:15:04.093 [2024-07-23 00:19:18.546142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.556001] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:04.093 [2024-07-23 00:19:18.577783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.577859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:04.093 [2024-07-23 00:19:18.577904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.443 ms 00:15:04.093 [2024-07-23 00:19:18.577923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.622868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.622956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:04.093 [2024-07-23 00:19:18.622990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.917 ms 00:15:04.093 [2024-07-23 00:19:18.623009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.623342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.623378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:04.093 [2024-07-23 00:19:18.623422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:15:04.093 [2024-07-23 00:19:18.623440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.627493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.627542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:04.093 [2024-07-23 00:19:18.627571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.967 ms 00:15:04.093 [2024-07-23 
00:19:18.627587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.631292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.631337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:04.093 [2024-07-23 00:19:18.631364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.595 ms 00:15:04.093 [2024-07-23 00:19:18.631380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.631777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.631807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:04.093 [2024-07-23 00:19:18.631830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:15:04.093 [2024-07-23 00:19:18.631846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.667311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.667395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:04.093 [2024-07-23 00:19:18.667443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.434 ms 00:15:04.093 [2024-07-23 00:19:18.667468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.673403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.673452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:04.093 [2024-07-23 00:19:18.673475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.826 ms 00:15:04.093 [2024-07-23 00:19:18.673491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.677427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.677472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:04.093 [2024-07-23 00:19:18.677494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.856 ms 00:15:04.093 [2024-07-23 00:19:18.677509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.681766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.681800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:04.093 [2024-07-23 00:19:18.681816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.181 ms 00:15:04.093 [2024-07-23 00:19:18.681827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.681920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.681934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:04.093 [2024-07-23 00:19:18.681949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:15:04.093 [2024-07-23 00:19:18.681959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.682084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.093 [2024-07-23 00:19:18.682097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:04.093 [2024-07-23 00:19:18.682113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.041 ms 00:15:04.093 [2024-07-23 00:19:18.682124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.093 [2024-07-23 00:19:18.683573] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2477.654 ms, result 0 00:15:04.093 { 00:15:04.093 "name": "ftl0", 00:15:04.093 "uuid": "b9316d50-f8ef-44c6-8603-0870039239e4" 00:15:04.094 } 00:15:04.094 00:19:18 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:04.094 00:19:18 ftl.ftl_fio_basic -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:15:04.094 00:19:18 ftl.ftl_fio_basic -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:04.094 00:19:18 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local i 00:15:04.094 00:19:18 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:04.094 00:19:18 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:04.094 00:19:18 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:04.353 00:19:18 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:04.612 [ 00:15:04.612 { 00:15:04.612 "name": "ftl0", 00:15:04.612 "aliases": [ 00:15:04.612 "b9316d50-f8ef-44c6-8603-0870039239e4" 00:15:04.612 ], 00:15:04.612 "product_name": "FTL disk", 00:15:04.612 "block_size": 4096, 00:15:04.612 "num_blocks": 20971520, 00:15:04.612 "uuid": "b9316d50-f8ef-44c6-8603-0870039239e4", 00:15:04.612 "assigned_rate_limits": { 00:15:04.612 "rw_ios_per_sec": 0, 00:15:04.612 "rw_mbytes_per_sec": 0, 00:15:04.612 "r_mbytes_per_sec": 0, 00:15:04.612 "w_mbytes_per_sec": 0 00:15:04.612 }, 00:15:04.612 "claimed": false, 00:15:04.612 "zoned": false, 00:15:04.612 "supported_io_types": { 00:15:04.612 "read": true, 00:15:04.612 "write": true, 00:15:04.612 "unmap": true, 00:15:04.612 "write_zeroes": true, 00:15:04.612 "flush": true, 00:15:04.612 "reset": false, 00:15:04.612 "compare": false, 00:15:04.612 "compare_and_write": false, 00:15:04.612 "abort": false, 00:15:04.612 "nvme_admin": false, 00:15:04.612 "nvme_io": false 00:15:04.612 }, 00:15:04.612 "driver_specific": { 00:15:04.612 "ftl": { 00:15:04.612 "base_bdev": "06af267e-c09b-4cae-9a69-c432eecb8225", 00:15:04.612 "cache": "nvc0n1p0" 00:15:04.612 } 00:15:04.612 } 00:15:04.612 } 00:15:04.612 ] 00:15:04.612 00:19:19 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # return 0 00:15:04.612 00:19:19 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:04.612 00:19:19 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:04.612 00:19:19 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:04.612 00:19:19 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:04.873 [2024-07-23 00:19:19.399271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.873 [2024-07-23 00:19:19.399329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:04.873 [2024-07-23 00:19:19.399345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:04.873 [2024-07-23 00:19:19.399374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.873 [2024-07-23 00:19:19.399444] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:15:04.873 [2024-07-23 00:19:19.400170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.873 [2024-07-23 00:19:19.400188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:04.873 [2024-07-23 00:19:19.400204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:15:04.873 [2024-07-23 00:19:19.400216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.873 [2024-07-23 00:19:19.401149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.873 [2024-07-23 00:19:19.401191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:04.873 [2024-07-23 00:19:19.401205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:15:04.873 [2024-07-23 00:19:19.401231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.873 [2024-07-23 00:19:19.403835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.873 [2024-07-23 00:19:19.403858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:04.873 [2024-07-23 00:19:19.403871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.546 ms 00:15:04.873 [2024-07-23 00:19:19.403881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.873 [2024-07-23 00:19:19.408965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.873 [2024-07-23 00:19:19.408997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:04.873 [2024-07-23 00:19:19.409014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.020 ms 00:15:04.873 [2024-07-23 00:19:19.409041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.873 [2024-07-23 00:19:19.410652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.873 [2024-07-23 00:19:19.410687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:04.873 [2024-07-23 00:19:19.410706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:15:04.873 [2024-07-23 00:19:19.410715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.873 [2024-07-23 00:19:19.415703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.873 [2024-07-23 00:19:19.415755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:04.873 [2024-07-23 00:19:19.415773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.941 ms 00:15:04.873 [2024-07-23 00:19:19.415786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.873 [2024-07-23 00:19:19.416057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.873 [2024-07-23 00:19:19.416070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:04.873 [2024-07-23 00:19:19.416083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:15:04.873 [2024-07-23 00:19:19.416093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.873 [2024-07-23 00:19:19.417956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.873 [2024-07-23 00:19:19.417989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:04.873 [2024-07-23 00:19:19.418003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.816 ms 00:15:04.873 
[2024-07-23 00:19:19.418012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.873 [2024-07-23 00:19:19.419498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.873 [2024-07-23 00:19:19.419528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:04.873 [2024-07-23 00:19:19.419545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.431 ms 00:15:04.873 [2024-07-23 00:19:19.419554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.874 [2024-07-23 00:19:19.420824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.874 [2024-07-23 00:19:19.420857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:04.874 [2024-07-23 00:19:19.420871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:15:04.874 [2024-07-23 00:19:19.420881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.874 [2024-07-23 00:19:19.422225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.874 [2024-07-23 00:19:19.422354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:04.874 [2024-07-23 00:19:19.422430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.183 ms 00:15:04.874 [2024-07-23 00:19:19.422465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.874 [2024-07-23 00:19:19.422542] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:04.874 [2024-07-23 00:19:19.422664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:15:04.874 [2024-07-23 00:19:19.422909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.422997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:04.874 [2024-07-23 00:19:19.423763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423846] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:04.875 [2024-07-23 00:19:19.423999] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:04.875 [2024-07-23 00:19:19.424014] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b9316d50-f8ef-44c6-8603-0870039239e4 00:15:04.875 [2024-07-23 00:19:19.424025] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:04.875 [2024-07-23 00:19:19.424040] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:04.875 [2024-07-23 00:19:19.424051] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:04.875 [2024-07-23 00:19:19.424063] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:04.875 [2024-07-23 00:19:19.424073] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:04.875 [2024-07-23 00:19:19.424085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:04.875 [2024-07-23 00:19:19.424095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:04.875 [2024-07-23 00:19:19.424106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:04.875 [2024-07-23 00:19:19.424115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:04.875 [2024-07-23 00:19:19.424127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.875 [2024-07-23 00:19:19.424137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:04.875 [2024-07-23 00:19:19.424150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.590 ms 00:15:04.875 [2024-07-23 00:19:19.424174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.875 [2024-07-23 00:19:19.426063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.875 [2024-07-23 00:19:19.426085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:15:04.875 [2024-07-23 00:19:19.426102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.846 ms 00:15:04.875 [2024-07-23 00:19:19.426124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.875 [2024-07-23 00:19:19.426280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.875 [2024-07-23 00:19:19.426291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:04.875 [2024-07-23 00:19:19.426305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:15:04.875 [2024-07-23 00:19:19.426315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.875 [2024-07-23 00:19:19.433498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.875 [2024-07-23 00:19:19.433522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:04.875 [2024-07-23 00:19:19.433536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.875 [2024-07-23 00:19:19.433547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.875 [2024-07-23 00:19:19.433626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.875 [2024-07-23 00:19:19.433637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:04.875 [2024-07-23 00:19:19.433650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.875 [2024-07-23 00:19:19.433660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.875 [2024-07-23 00:19:19.433776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.875 [2024-07-23 00:19:19.433790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:04.875 [2024-07-23 00:19:19.433806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.875 [2024-07-23 00:19:19.433816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.875 [2024-07-23 00:19:19.433866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.875 [2024-07-23 00:19:19.433891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:04.875 [2024-07-23 00:19:19.433905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.875 [2024-07-23 00:19:19.433914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.875 [2024-07-23 00:19:19.446242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.875 [2024-07-23 00:19:19.446306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:04.875 [2024-07-23 00:19:19.446322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.875 [2024-07-23 00:19:19.446348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.875 [2024-07-23 00:19:19.454775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.875 [2024-07-23 00:19:19.454813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:04.875 [2024-07-23 00:19:19.454829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.875 [2024-07-23 00:19:19.454840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.875 [2024-07-23 00:19:19.454949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.875 [2024-07-23 
00:19:19.454979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:04.875 [2024-07-23 00:19:19.454995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.875 [2024-07-23 00:19:19.455017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.875 [2024-07-23 00:19:19.455111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.875 [2024-07-23 00:19:19.455121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:04.875 [2024-07-23 00:19:19.455135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.875 [2024-07-23 00:19:19.455145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.875 [2024-07-23 00:19:19.455323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.875 [2024-07-23 00:19:19.455341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:04.875 [2024-07-23 00:19:19.455380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.875 [2024-07-23 00:19:19.455391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.875 [2024-07-23 00:19:19.455470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.875 [2024-07-23 00:19:19.455482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:04.875 [2024-07-23 00:19:19.455495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.875 [2024-07-23 00:19:19.455505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.875 [2024-07-23 00:19:19.455581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.875 [2024-07-23 00:19:19.455593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:04.875 [2024-07-23 00:19:19.455613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.875 [2024-07-23 00:19:19.455622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.875 [2024-07-23 00:19:19.455697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.875 [2024-07-23 00:19:19.455709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:04.875 [2024-07-23 00:19:19.455736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.875 [2024-07-23 00:19:19.455746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.875 [2024-07-23 00:19:19.456013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.799 ms, result 0 00:15:04.875 true 00:15:04.875 00:19:19 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 87583 00:15:04.875 00:19:19 ftl.ftl_fio_basic -- common/autotest_common.sh@946 -- # '[' -z 87583 ']' 00:15:04.875 00:19:19 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # kill -0 87583 00:15:04.875 00:19:19 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # uname 00:15:04.875 00:19:19 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:04.875 00:19:19 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87583 00:15:04.875 killing process with pid 87583 00:15:04.875 00:19:19 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:04.875 00:19:19 ftl.ftl_fio_basic -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:04.875 00:19:19 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87583' 00:15:04.875 00:19:19 ftl.ftl_fio_basic -- common/autotest_common.sh@965 -- # kill 87583 00:15:04.875 00:19:19 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # wait 87583 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:08.189 00:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:08.189 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:08.189 fio-3.35 00:15:08.189 Starting 1 thread 00:15:13.467 00:15:13.467 test: (groupid=0, jobs=1): err= 0: pid=87746: Tue Jul 23 00:19:27 2024 00:15:13.467 read: IOPS=861, BW=57.2MiB/s (60.0MB/s)(255MiB/4450msec) 00:15:13.467 slat (nsec): min=4495, max=40714, avg=8874.77, stdev=3617.45 00:15:13.467 clat (usec): min=326, max=3533, avg=523.20, stdev=76.57 00:15:13.467 lat (usec): min=331, max=3542, avg=532.08, stdev=77.65 00:15:13.467 clat percentiles (usec): 00:15:13.467 | 1.00th=[ 392], 5.00th=[ 412], 10.00th=[ 457], 20.00th=[ 474], 00:15:13.467 | 30.00th=[ 
490], 40.00th=[ 502], 50.00th=[ 529], 60.00th=[ 537], 00:15:13.467 | 70.00th=[ 570], 80.00th=[ 578], 90.00th=[ 586], 95.00th=[ 603], 00:15:13.467 | 99.00th=[ 668], 99.50th=[ 709], 99.90th=[ 734], 99.95th=[ 857], 00:15:13.467 | 99.99th=[ 3523] 00:15:13.467 write: IOPS=867, BW=57.6MiB/s (60.4MB/s)(256MiB/4446msec); 0 zone resets 00:15:13.467 slat (usec): min=15, max=130, avg=26.65, stdev= 7.98 00:15:13.467 clat (usec): min=364, max=1083, avg=585.55, stdev=77.74 00:15:13.467 lat (usec): min=389, max=1115, avg=612.20, stdev=81.07 00:15:13.467 clat percentiles (usec): 00:15:13.467 | 1.00th=[ 429], 5.00th=[ 478], 10.00th=[ 486], 20.00th=[ 537], 00:15:13.467 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 586], 60.00th=[ 594], 00:15:13.467 | 70.00th=[ 603], 80.00th=[ 627], 90.00th=[ 676], 95.00th=[ 685], 00:15:13.467 | 99.00th=[ 914], 99.50th=[ 988], 99.90th=[ 1057], 99.95th=[ 1090], 00:15:13.467 | 99.99th=[ 1090] 00:15:13.467 bw ( KiB/s): min=54808, max=64872, per=99.10%, avg=58446.00, stdev=3826.52, samples=8 00:15:13.467 iops : min= 806, max= 954, avg=859.50, stdev=56.27, samples=8 00:15:13.467 lat (usec) : 500=26.26%, 750=72.71%, 1000=0.82% 00:15:13.467 lat (msec) : 2=0.20%, 4=0.01% 00:15:13.467 cpu : usr=98.94%, sys=0.18%, ctx=11, majf=0, minf=1326 00:15:13.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:13.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.467 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:13.467 00:15:13.467 Run status group 0 (all jobs): 00:15:13.467 READ: bw=57.2MiB/s (60.0MB/s), 57.2MiB/s-57.2MiB/s (60.0MB/s-60.0MB/s), io=255MiB (267MB), run=4450-4450msec 00:15:13.467 WRITE: bw=57.6MiB/s (60.4MB/s), 57.6MiB/s-57.6MiB/s (60.4MB/s-60.4MB/s), io=256MiB (269MB), run=4446-4446msec 00:15:13.726 ----------------------------------------------------- 00:15:13.726 Suppressions used: 00:15:13.726 count bytes template 00:15:13.726 1 5 /usr/src/fio/parse.c 00:15:13.726 1 8 libtcmalloc_minimal.so 00:15:13.726 1 904 libcrypto.so 00:15:13.726 ----------------------------------------------------- 00:15:13.726 00:15:13.726 00:19:28 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:13.726 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:13.726 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1335 -- # local sanitizers 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:13.984 00:19:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:13.984 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:13.984 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:13.984 fio-3.35 00:15:13.984 Starting 2 threads 00:15:40.526 00:15:40.526 first_half: (groupid=0, jobs=1): err= 0: pid=87838: Tue Jul 23 00:19:54 2024 00:15:40.526 read: IOPS=2576, BW=10.1MiB/s (10.6MB/s)(256MiB/25414msec) 00:15:40.526 slat (usec): min=3, max=113, avg= 9.35, stdev= 4.86 00:15:40.526 clat (usec): min=576, max=295104, avg=42003.14, stdev=27403.25 00:15:40.526 lat (usec): min=580, max=295121, avg=42012.49, stdev=27404.29 00:15:40.526 clat percentiles (msec): 00:15:40.526 | 1.00th=[ 8], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:15:40.526 | 30.00th=[ 34], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:15:40.526 | 70.00th=[ 38], 80.00th=[ 41], 90.00th=[ 45], 95.00th=[ 88], 00:15:40.526 | 99.00th=[ 190], 99.50th=[ 205], 99.90th=[ 226], 99.95th=[ 257], 00:15:40.526 | 99.99th=[ 288] 00:15:40.526 write: IOPS=2581, BW=10.1MiB/s (10.6MB/s)(256MiB/25387msec); 0 zone resets 00:15:40.526 slat (usec): min=4, max=539, avg= 9.30, stdev= 6.73 00:15:40.526 clat (usec): min=432, max=45032, avg=7625.61, stdev=7076.94 00:15:40.526 lat (usec): min=445, max=45049, avg=7634.91, stdev=7077.21 00:15:40.526 clat percentiles (usec): 00:15:40.526 | 1.00th=[ 1090], 5.00th=[ 1467], 10.00th=[ 1795], 20.00th=[ 3163], 00:15:40.526 | 30.00th=[ 4359], 40.00th=[ 5538], 50.00th=[ 6390], 60.00th=[ 7111], 00:15:40.526 | 70.00th=[ 7767], 80.00th=[ 9372], 90.00th=[12780], 95.00th=[20579], 00:15:40.526 | 99.00th=[41157], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:15:40.526 | 99.99th=[44303] 00:15:40.526 bw ( KiB/s): min= 752, max=45968, per=100.00%, avg=21705.67, stdev=12956.30, samples=24 00:15:40.526 iops : min= 188, max=11492, avg=5426.42, stdev=3239.08, samples=24 00:15:40.526 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.23% 00:15:40.526 lat (msec) : 2=6.02%, 4=7.35%, 10=28.94%, 20=6.35%, 50=47.21% 00:15:40.526 
lat (msec) : 100=1.63%, 250=2.18%, 500=0.03% 00:15:40.526 cpu : usr=99.17%, sys=0.20%, ctx=49, majf=0, minf=5609 00:15:40.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:40.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.526 complete : 0=0.0%, 4=99.3%, 8=0.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:40.526 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:40.527 second_half: (groupid=0, jobs=1): err= 0: pid=87839: Tue Jul 23 00:19:54 2024 00:15:40.527 read: IOPS=2592, BW=10.1MiB/s (10.6MB/s)(256MiB/25257msec) 00:15:40.527 slat (nsec): min=3514, max=53865, avg=8684.14, stdev=3316.13 00:15:40.527 clat (msec): min=9, max=217, avg=42.27, stdev=24.06 00:15:40.527 lat (msec): min=9, max=217, avg=42.28, stdev=24.06 00:15:40.527 clat percentiles (msec): 00:15:40.527 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:15:40.527 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:15:40.527 | 70.00th=[ 38], 80.00th=[ 42], 90.00th=[ 46], 95.00th=[ 78], 00:15:40.527 | 99.00th=[ 176], 99.50th=[ 186], 99.90th=[ 207], 99.95th=[ 215], 00:15:40.527 | 99.99th=[ 218] 00:15:40.527 write: IOPS=2609, BW=10.2MiB/s (10.7MB/s)(256MiB/25113msec); 0 zone resets 00:15:40.527 slat (usec): min=4, max=811, avg= 8.99, stdev= 7.83 00:15:40.527 clat (usec): min=494, max=45187, avg=7057.92, stdev=3918.16 00:15:40.527 lat (usec): min=513, max=45194, avg=7066.91, stdev=3918.50 00:15:40.527 clat percentiles (usec): 00:15:40.527 | 1.00th=[ 1270], 5.00th=[ 2089], 10.00th=[ 2966], 20.00th=[ 4113], 00:15:40.527 | 30.00th=[ 4817], 40.00th=[ 5669], 50.00th=[ 6325], 60.00th=[ 7308], 00:15:40.527 | 70.00th=[ 8160], 80.00th=[ 9634], 90.00th=[12125], 95.00th=[13304], 00:15:40.527 | 99.00th=[19006], 99.50th=[24511], 99.90th=[41681], 99.95th=[43254], 00:15:40.527 | 99.99th=[44827] 00:15:40.527 bw ( KiB/s): min= 96, max=47696, per=100.00%, avg=23653.09, stdev=16859.07, samples=22 00:15:40.527 iops : min= 24, max=11924, avg=5913.27, stdev=4214.77, samples=22 00:15:40.527 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.10% 00:15:40.527 lat (msec) : 2=2.06%, 4=7.34%, 10=31.31%, 20=8.75%, 50=46.49% 00:15:40.527 lat (msec) : 100=1.97%, 250=1.94% 00:15:40.527 cpu : usr=99.17%, sys=0.17%, ctx=38, majf=0, minf=5525 00:15:40.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:40.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.527 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:40.527 issued rwts: total=65490,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:40.527 00:15:40.527 Run status group 0 (all jobs): 00:15:40.527 READ: bw=20.1MiB/s (21.1MB/s), 10.1MiB/s-10.1MiB/s (10.6MB/s-10.6MB/s), io=512MiB (536MB), run=25257-25414msec 00:15:40.527 WRITE: bw=20.2MiB/s (21.1MB/s), 10.1MiB/s-10.2MiB/s (10.6MB/s-10.7MB/s), io=512MiB (537MB), run=25113-25387msec 00:15:41.462 ----------------------------------------------------- 00:15:41.462 Suppressions used: 00:15:41.462 count bytes template 00:15:41.462 2 10 /usr/src/fio/parse.c 00:15:41.462 4 384 /usr/src/fio/iolog.c 00:15:41.462 1 8 libtcmalloc_minimal.so 00:15:41.462 1 904 libcrypto.so 00:15:41.462 ----------------------------------------------------- 00:15:41.462 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 
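The fio_plugin wrapper traced before each of these runs follows one fixed pattern: it asks ldd which ASAN runtime the instrumented spdk_bdev ioengine is linked against, then preloads that runtime ahead of the plugin itself, so that fio's dlopen of the ioengine finds the sanitizer already mapped. A minimal stand-alone sketch of the pattern, with paths copied from this run and $config standing in for any of the ftl fio job files used here:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    config=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
    # the third ldd column is the resolved library path, e.g. /usr/lib64/libasan.so.8
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # the sanitizer runtime must come before the ioengine itself in LD_PRELOAD
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$config"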
00:15:41.462 00:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:15:41.462 00:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:15:41.462 00:19:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:41.462 00:19:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:41.462 00:19:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:15:41.462 00:19:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:41.462 00:19:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:41.720 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:41.720 fio-3.35 00:15:41.720 Starting 1 thread 00:15:56.590 00:15:56.590 test: (groupid=0, jobs=1): err= 0: pid=88152: Tue Jul 23 00:20:10 2024 00:15:56.590 read: IOPS=7175, BW=28.0MiB/s (29.4MB/s)(255MiB/9087msec) 00:15:56.590 slat (nsec): min=3442, max=65552, avg=5986.26, stdev=2807.57 00:15:56.590 clat (usec): min=638, max=41237, avg=17829.40, stdev=2432.54 00:15:56.590 lat (usec): min=643, max=41246, avg=17835.38, stdev=2433.64 00:15:56.590 clat percentiles (usec): 00:15:56.590 | 1.00th=[15401], 5.00th=[15664], 10.00th=[15795], 20.00th=[16057], 00:15:56.590 | 30.00th=[16188], 40.00th=[16450], 50.00th=[16909], 60.00th=[17433], 00:15:56.590 | 70.00th=[17957], 80.00th=[20841], 90.00th=[21365], 95.00th=[21627], 00:15:56.590 | 99.00th=[23200], 99.50th=[26870], 99.90th=[36439], 99.95th=[37487], 
00:15:56.590 | 99.99th=[40109] 00:15:56.590 write: IOPS=14.0k, BW=54.6MiB/s (57.2MB/s)(256MiB/4691msec); 0 zone resets 00:15:56.590 slat (usec): min=4, max=687, avg= 7.61, stdev= 7.27 00:15:56.590 clat (usec): min=539, max=51109, avg=9116.35, stdev=11160.56 00:15:56.590 lat (usec): min=547, max=51115, avg=9123.96, stdev=11160.56 00:15:56.590 clat percentiles (usec): 00:15:56.590 | 1.00th=[ 914], 5.00th=[ 1090], 10.00th=[ 1221], 20.00th=[ 1418], 00:15:56.590 | 30.00th=[ 1582], 40.00th=[ 1942], 50.00th=[ 5932], 60.00th=[ 6980], 00:15:56.590 | 70.00th=[ 7963], 80.00th=[ 9896], 90.00th=[33162], 95.00th=[34866], 00:15:56.590 | 99.00th=[36963], 99.50th=[38011], 99.90th=[47973], 99.95th=[50070], 00:15:56.590 | 99.99th=[51119] 00:15:56.590 bw ( KiB/s): min=18192, max=76216, per=93.80%, avg=52416.60, stdev=15114.35, samples=10 00:15:56.590 iops : min= 4548, max=19054, avg=13104.10, stdev=3778.56, samples=10 00:15:56.590 lat (usec) : 750=0.03%, 1000=1.29% 00:15:56.590 lat (msec) : 2=18.94%, 4=0.90%, 10=19.21%, 20=38.41%, 50=21.19% 00:15:56.590 lat (msec) : 100=0.02% 00:15:56.590 cpu : usr=98.85%, sys=0.38%, ctx=22, majf=0, minf=5578 00:15:56.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:56.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.590 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:56.590 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.590 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:56.590 00:15:56.590 Run status group 0 (all jobs): 00:15:56.590 READ: bw=28.0MiB/s (29.4MB/s), 28.0MiB/s-28.0MiB/s (29.4MB/s-29.4MB/s), io=255MiB (267MB), run=9087-9087msec 00:15:56.590 WRITE: bw=54.6MiB/s (57.2MB/s), 54.6MiB/s-54.6MiB/s (57.2MB/s-57.2MB/s), io=256MiB (268MB), run=4691-4691msec 00:15:56.847 ----------------------------------------------------- 00:15:56.847 Suppressions used: 00:15:56.847 count bytes template 00:15:56.847 1 5 /usr/src/fio/parse.c 00:15:56.848 2 192 /usr/src/fio/iolog.c 00:15:56.848 1 8 libtcmalloc_minimal.so 00:15:56.848 1 904 libcrypto.so 00:15:56.848 ----------------------------------------------------- 00:15:56.848 00:15:57.106 00:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:57.106 00:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.106 00:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:57.106 00:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:57.106 00:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:15:57.106 Remove shared memory files 00:15:57.106 00:20:11 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:57.106 00:20:11 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:15:57.106 00:20:11 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:15:57.106 00:20:11 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid73846 /dev/shm/spdk_tgt_trace.pid86556 00:15:57.106 00:20:11 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:57.106 00:20:11 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:15:57.106 00:15:57.106 real 0m59.023s 00:15:57.106 user 2m11.838s 00:15:57.106 sys 0m3.464s 00:15:57.106 00:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:57.106 ************************************ 00:15:57.106 END TEST 
ftl_fio_basic 00:15:57.107 ************************************ 00:15:57.107 00:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:57.107 00:20:11 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:57.107 00:20:11 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:15:57.107 00:20:11 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:57.107 00:20:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:57.107 ************************************ 00:15:57.107 START TEST ftl_bdevperf 00:15:57.107 ************************************ 00:15:57.107 00:20:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:57.366 * Looking for test storage... 00:15:57.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.366 00:20:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:57.366 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:57.366 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.366 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.366 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:57.366 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:57.366 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:57.366 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:57.366 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=88383 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 88383 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 88383 ']' 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:57.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:57.367 00:20:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:57.367 [2024-07-23 00:20:11.964392] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
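The launch sequence traced above is the standard SPDK harness handshake: bdevperf is started with -z so it initializes and then parks, waiting for an RPC trigger instead of running I/O immediately; a trap ties the app's lifetime to the test shell; and waitforlisten blocks until the RPC server answers on the default socket. The sketch below is a simplified stand-in for that handshake; the real waitforlisten helper does more bookkeeping, and rpc_get_methods is used here only as a cheap liveness probe:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$bdevperf" -z -T ftl0 &
    bdevperf_pid=$!
    trap 'kill $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
    # poll the default RPC socket (/var/tmp/spdk.sock) until the app is ready
    until "$rpc_py" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done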
00:15:57.367 [2024-07-23 00:20:11.964575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88383 ] 00:15:57.626 [2024-07-23 00:20:12.117781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.626 [2024-07-23 00:20:12.166728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.194 00:20:12 ftl.ftl_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:58.194 00:20:12 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:15:58.194 00:20:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:58.194 00:20:12 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:15:58.194 00:20:12 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:58.194 00:20:12 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:15:58.194 00:20:12 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:15:58.194 00:20:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:58.453 00:20:13 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:58.453 00:20:13 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:15:58.453 00:20:13 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:58.453 00:20:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:15:58.453 00:20:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:58.453 00:20:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:15:58.453 00:20:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:15:58.453 00:20:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:58.720 00:20:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:58.720 { 00:15:58.720 "name": "nvme0n1", 00:15:58.720 "aliases": [ 00:15:58.720 "41790b2d-10c1-4025-9104-3bfdedee972a" 00:15:58.720 ], 00:15:58.720 "product_name": "NVMe disk", 00:15:58.720 "block_size": 4096, 00:15:58.720 "num_blocks": 1310720, 00:15:58.720 "uuid": "41790b2d-10c1-4025-9104-3bfdedee972a", 00:15:58.720 "assigned_rate_limits": { 00:15:58.720 "rw_ios_per_sec": 0, 00:15:58.720 "rw_mbytes_per_sec": 0, 00:15:58.720 "r_mbytes_per_sec": 0, 00:15:58.720 "w_mbytes_per_sec": 0 00:15:58.720 }, 00:15:58.720 "claimed": true, 00:15:58.720 "claim_type": "read_many_write_one", 00:15:58.720 "zoned": false, 00:15:58.720 "supported_io_types": { 00:15:58.720 "read": true, 00:15:58.720 "write": true, 00:15:58.720 "unmap": true, 00:15:58.720 "write_zeroes": true, 00:15:58.720 "flush": true, 00:15:58.720 "reset": true, 00:15:58.720 "compare": true, 00:15:58.720 "compare_and_write": false, 00:15:58.720 "abort": true, 00:15:58.720 "nvme_admin": true, 00:15:58.720 "nvme_io": true 00:15:58.720 }, 00:15:58.720 "driver_specific": { 00:15:58.720 "nvme": [ 00:15:58.720 { 00:15:58.720 "pci_address": "0000:00:11.0", 00:15:58.720 "trid": { 00:15:58.720 "trtype": "PCIe", 00:15:58.720 "traddr": "0000:00:11.0" 00:15:58.720 }, 00:15:58.720 "ctrlr_data": { 00:15:58.720 "cntlid": 0, 00:15:58.720 "vendor_id": "0x1b36", 00:15:58.720 "model_number": "QEMU NVMe Ctrl", 00:15:58.720 "serial_number": "12341", 
00:15:58.720 "firmware_revision": "8.0.0", 00:15:58.720 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:58.720 "oacs": { 00:15:58.720 "security": 0, 00:15:58.720 "format": 1, 00:15:58.720 "firmware": 0, 00:15:58.720 "ns_manage": 1 00:15:58.720 }, 00:15:58.720 "multi_ctrlr": false, 00:15:58.720 "ana_reporting": false 00:15:58.720 }, 00:15:58.720 "vs": { 00:15:58.720 "nvme_version": "1.4" 00:15:58.720 }, 00:15:58.720 "ns_data": { 00:15:58.720 "id": 1, 00:15:58.720 "can_share": false 00:15:58.720 } 00:15:58.720 } 00:15:58.720 ], 00:15:58.720 "mp_policy": "active_passive" 00:15:58.720 } 00:15:58.720 } 00:15:58.720 ]' 00:15:58.720 00:20:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:58.720 00:20:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:15:58.720 00:20:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:58.720 00:20:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=1310720 00:15:58.720 00:20:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:15:58.720 00:20:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 5120 00:15:58.720 00:20:13 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:15:58.720 00:20:13 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:58.720 00:20:13 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:15:58.720 00:20:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:58.720 00:20:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:58.982 00:20:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=ea553ff1-d1c8-447b-ae65-acf19ec88f27 00:15:58.982 00:20:13 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:15:58.982 00:20:13 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ea553ff1-d1c8-447b-ae65-acf19ec88f27 00:15:59.241 00:20:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:59.241 00:20:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=e4bb2305-7407-4651-9294-216739fdb3f6 00:15:59.241 00:20:13 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e4bb2305-7407-4651-9294-216739fdb3f6 00:15:59.499 00:20:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=8d957205-0213-45cd-aa34-9d4cfe4f07fa 00:15:59.499 00:20:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8d957205-0213-45cd-aa34-9d4cfe4f07fa 00:15:59.499 00:20:14 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:15:59.499 00:20:14 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:59.499 00:20:14 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=8d957205-0213-45cd-aa34-9d4cfe4f07fa 00:15:59.499 00:20:14 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:15:59.499 00:20:14 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 8d957205-0213-45cd-aa34-9d4cfe4f07fa 00:15:59.499 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=8d957205-0213-45cd-aa34-9d4cfe4f07fa 00:15:59.499 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:59.499 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:15:59.499 00:20:14 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1377 -- # local nb 00:15:59.499 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8d957205-0213-45cd-aa34-9d4cfe4f07fa 00:15:59.758 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:59.758 { 00:15:59.758 "name": "8d957205-0213-45cd-aa34-9d4cfe4f07fa", 00:15:59.758 "aliases": [ 00:15:59.758 "lvs/nvme0n1p0" 00:15:59.758 ], 00:15:59.758 "product_name": "Logical Volume", 00:15:59.758 "block_size": 4096, 00:15:59.758 "num_blocks": 26476544, 00:15:59.758 "uuid": "8d957205-0213-45cd-aa34-9d4cfe4f07fa", 00:15:59.758 "assigned_rate_limits": { 00:15:59.758 "rw_ios_per_sec": 0, 00:15:59.758 "rw_mbytes_per_sec": 0, 00:15:59.758 "r_mbytes_per_sec": 0, 00:15:59.758 "w_mbytes_per_sec": 0 00:15:59.758 }, 00:15:59.758 "claimed": false, 00:15:59.758 "zoned": false, 00:15:59.758 "supported_io_types": { 00:15:59.758 "read": true, 00:15:59.758 "write": true, 00:15:59.758 "unmap": true, 00:15:59.758 "write_zeroes": true, 00:15:59.758 "flush": false, 00:15:59.758 "reset": true, 00:15:59.758 "compare": false, 00:15:59.758 "compare_and_write": false, 00:15:59.758 "abort": false, 00:15:59.758 "nvme_admin": false, 00:15:59.758 "nvme_io": false 00:15:59.758 }, 00:15:59.758 "driver_specific": { 00:15:59.758 "lvol": { 00:15:59.758 "lvol_store_uuid": "e4bb2305-7407-4651-9294-216739fdb3f6", 00:15:59.758 "base_bdev": "nvme0n1", 00:15:59.758 "thin_provision": true, 00:15:59.758 "num_allocated_clusters": 0, 00:15:59.758 "snapshot": false, 00:15:59.758 "clone": false, 00:15:59.758 "esnap_clone": false 00:15:59.758 } 00:15:59.758 } 00:15:59.758 } 00:15:59.758 ]' 00:15:59.758 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:59.758 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:15:59.758 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:59.758 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:59.758 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:59.758 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:15:59.758 00:20:14 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:15:59.758 00:20:14 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:15:59.758 00:20:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:00.017 00:20:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:00.017 00:20:14 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:00.017 00:20:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 8d957205-0213-45cd-aa34-9d4cfe4f07fa 00:16:00.017 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=8d957205-0213-45cd-aa34-9d4cfe4f07fa 00:16:00.017 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:00.017 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:00.017 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:00.017 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8d957205-0213-45cd-aa34-9d4cfe4f07fa 00:16:00.275 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:00.275 { 00:16:00.275 "name": 
"8d957205-0213-45cd-aa34-9d4cfe4f07fa", 00:16:00.275 "aliases": [ 00:16:00.275 "lvs/nvme0n1p0" 00:16:00.275 ], 00:16:00.275 "product_name": "Logical Volume", 00:16:00.275 "block_size": 4096, 00:16:00.275 "num_blocks": 26476544, 00:16:00.275 "uuid": "8d957205-0213-45cd-aa34-9d4cfe4f07fa", 00:16:00.275 "assigned_rate_limits": { 00:16:00.275 "rw_ios_per_sec": 0, 00:16:00.275 "rw_mbytes_per_sec": 0, 00:16:00.275 "r_mbytes_per_sec": 0, 00:16:00.275 "w_mbytes_per_sec": 0 00:16:00.275 }, 00:16:00.275 "claimed": false, 00:16:00.275 "zoned": false, 00:16:00.275 "supported_io_types": { 00:16:00.275 "read": true, 00:16:00.275 "write": true, 00:16:00.275 "unmap": true, 00:16:00.275 "write_zeroes": true, 00:16:00.275 "flush": false, 00:16:00.275 "reset": true, 00:16:00.275 "compare": false, 00:16:00.275 "compare_and_write": false, 00:16:00.275 "abort": false, 00:16:00.275 "nvme_admin": false, 00:16:00.275 "nvme_io": false 00:16:00.275 }, 00:16:00.275 "driver_specific": { 00:16:00.275 "lvol": { 00:16:00.275 "lvol_store_uuid": "e4bb2305-7407-4651-9294-216739fdb3f6", 00:16:00.275 "base_bdev": "nvme0n1", 00:16:00.275 "thin_provision": true, 00:16:00.275 "num_allocated_clusters": 0, 00:16:00.275 "snapshot": false, 00:16:00.275 "clone": false, 00:16:00.275 "esnap_clone": false 00:16:00.275 } 00:16:00.275 } 00:16:00.275 } 00:16:00.275 ]' 00:16:00.275 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:00.275 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:00.275 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:00.275 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:00.275 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:00.275 00:20:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:16:00.275 00:20:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:00.275 00:20:14 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:00.534 00:20:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:16:00.534 00:20:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 8d957205-0213-45cd-aa34-9d4cfe4f07fa 00:16:00.534 00:20:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=8d957205-0213-45cd-aa34-9d4cfe4f07fa 00:16:00.534 00:20:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:00.534 00:20:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:00.534 00:20:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:00.534 00:20:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8d957205-0213-45cd-aa34-9d4cfe4f07fa 00:16:00.794 00:20:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:00.794 { 00:16:00.794 "name": "8d957205-0213-45cd-aa34-9d4cfe4f07fa", 00:16:00.794 "aliases": [ 00:16:00.794 "lvs/nvme0n1p0" 00:16:00.794 ], 00:16:00.794 "product_name": "Logical Volume", 00:16:00.794 "block_size": 4096, 00:16:00.794 "num_blocks": 26476544, 00:16:00.794 "uuid": "8d957205-0213-45cd-aa34-9d4cfe4f07fa", 00:16:00.794 "assigned_rate_limits": { 00:16:00.794 "rw_ios_per_sec": 0, 00:16:00.794 "rw_mbytes_per_sec": 0, 00:16:00.794 "r_mbytes_per_sec": 0, 00:16:00.794 "w_mbytes_per_sec": 0 00:16:00.794 }, 00:16:00.794 "claimed": false, 
00:16:00.794 "zoned": false, 00:16:00.794 "supported_io_types": { 00:16:00.794 "read": true, 00:16:00.794 "write": true, 00:16:00.794 "unmap": true, 00:16:00.794 "write_zeroes": true, 00:16:00.794 "flush": false, 00:16:00.794 "reset": true, 00:16:00.794 "compare": false, 00:16:00.794 "compare_and_write": false, 00:16:00.794 "abort": false, 00:16:00.794 "nvme_admin": false, 00:16:00.794 "nvme_io": false 00:16:00.794 }, 00:16:00.794 "driver_specific": { 00:16:00.794 "lvol": { 00:16:00.794 "lvol_store_uuid": "e4bb2305-7407-4651-9294-216739fdb3f6", 00:16:00.794 "base_bdev": "nvme0n1", 00:16:00.794 "thin_provision": true, 00:16:00.794 "num_allocated_clusters": 0, 00:16:00.794 "snapshot": false, 00:16:00.794 "clone": false, 00:16:00.794 "esnap_clone": false 00:16:00.794 } 00:16:00.794 } 00:16:00.794 } 00:16:00.794 ]' 00:16:00.794 00:20:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:00.794 00:20:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:00.794 00:20:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:00.794 00:20:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:00.794 00:20:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:00.794 00:20:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:16:00.794 00:20:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:16:00.794 00:20:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8d957205-0213-45cd-aa34-9d4cfe4f07fa -c nvc0n1p0 --l2p_dram_limit 20 00:16:01.055 [2024-07-23 00:20:15.610714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.055 [2024-07-23 00:20:15.610771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:01.055 [2024-07-23 00:20:15.610788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:01.055 [2024-07-23 00:20:15.610801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.055 [2024-07-23 00:20:15.610861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.055 [2024-07-23 00:20:15.610885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:01.055 [2024-07-23 00:20:15.610896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:01.055 [2024-07-23 00:20:15.610915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.055 [2024-07-23 00:20:15.610941] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:01.055 [2024-07-23 00:20:15.611211] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:01.055 [2024-07-23 00:20:15.611235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.055 [2024-07-23 00:20:15.611252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:01.055 [2024-07-23 00:20:15.611279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:16:01.055 [2024-07-23 00:20:15.611292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.055 [2024-07-23 00:20:15.611365] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 970ea330-cc61-4ff3-ad0d-1c15f97ef82c 00:16:01.055 [2024-07-23 00:20:15.612723] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.055 [2024-07-23 00:20:15.612748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:01.055 [2024-07-23 00:20:15.612762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:01.055 [2024-07-23 00:20:15.612782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.055 [2024-07-23 00:20:15.620130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.055 [2024-07-23 00:20:15.620167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:01.055 [2024-07-23 00:20:15.620184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.304 ms 00:16:01.055 [2024-07-23 00:20:15.620195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.055 [2024-07-23 00:20:15.620290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.055 [2024-07-23 00:20:15.620308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:01.055 [2024-07-23 00:20:15.620321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:01.055 [2024-07-23 00:20:15.620338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.055 [2024-07-23 00:20:15.620397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.055 [2024-07-23 00:20:15.620410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:01.055 [2024-07-23 00:20:15.620423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:01.055 [2024-07-23 00:20:15.620433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.055 [2024-07-23 00:20:15.620459] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:01.055 [2024-07-23 00:20:15.622235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.055 [2024-07-23 00:20:15.622282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:01.055 [2024-07-23 00:20:15.622294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.788 ms 00:16:01.055 [2024-07-23 00:20:15.622307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.055 [2024-07-23 00:20:15.622341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.055 [2024-07-23 00:20:15.622355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:01.055 [2024-07-23 00:20:15.622366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:01.055 [2024-07-23 00:20:15.622381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.055 [2024-07-23 00:20:15.622398] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:01.055 [2024-07-23 00:20:15.622538] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:01.055 [2024-07-23 00:20:15.622552] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:01.055 [2024-07-23 00:20:15.622567] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:01.055 [2024-07-23 00:20:15.622581] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:01.055 [2024-07-23 
00:20:15.622603] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:01.055 [2024-07-23 00:20:15.622615] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:01.056 [2024-07-23 00:20:15.622636] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:01.056 [2024-07-23 00:20:15.622655] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:01.056 [2024-07-23 00:20:15.622674] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:01.056 [2024-07-23 00:20:15.622685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.056 [2024-07-23 00:20:15.622698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:01.056 [2024-07-23 00:20:15.622708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:16:01.056 [2024-07-23 00:20:15.622721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.056 [2024-07-23 00:20:15.622789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.056 [2024-07-23 00:20:15.622804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:01.056 [2024-07-23 00:20:15.622815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:01.056 [2024-07-23 00:20:15.622831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.056 [2024-07-23 00:20:15.622920] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:01.056 [2024-07-23 00:20:15.622935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:01.056 [2024-07-23 00:20:15.622953] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:01.056 [2024-07-23 00:20:15.622966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.056 [2024-07-23 00:20:15.622980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:01.056 [2024-07-23 00:20:15.622992] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:01.056 [2024-07-23 00:20:15.623001] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:01.056 [2024-07-23 00:20:15.623013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:01.056 [2024-07-23 00:20:15.623022] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:01.056 [2024-07-23 00:20:15.623034] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:01.056 [2024-07-23 00:20:15.623044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:01.056 [2024-07-23 00:20:15.623056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:01.056 [2024-07-23 00:20:15.623066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:01.056 [2024-07-23 00:20:15.623080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:01.056 [2024-07-23 00:20:15.623090] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:01.056 [2024-07-23 00:20:15.623101] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.056 [2024-07-23 00:20:15.623113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:01.056 [2024-07-23 00:20:15.623125] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:01.056 [2024-07-23 
00:20:15.623134] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.056 [2024-07-23 00:20:15.623147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:01.056 [2024-07-23 00:20:15.623159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:01.056 [2024-07-23 00:20:15.623170] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.056 [2024-07-23 00:20:15.623180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:01.056 [2024-07-23 00:20:15.623191] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:01.056 [2024-07-23 00:20:15.623200] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.056 [2024-07-23 00:20:15.623212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:01.056 [2024-07-23 00:20:15.623221] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:01.056 [2024-07-23 00:20:15.623233] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.056 [2024-07-23 00:20:15.623243] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:01.056 [2024-07-23 00:20:15.623257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:01.056 [2024-07-23 00:20:15.623277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.056 [2024-07-23 00:20:15.623288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:01.056 [2024-07-23 00:20:15.623297] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:01.056 [2024-07-23 00:20:15.623309] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:01.056 [2024-07-23 00:20:15.623318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:01.056 [2024-07-23 00:20:15.623330] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:01.056 [2024-07-23 00:20:15.623342] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:01.056 [2024-07-23 00:20:15.623353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:01.056 [2024-07-23 00:20:15.623363] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:01.056 [2024-07-23 00:20:15.623375] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.056 [2024-07-23 00:20:15.623384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:01.056 [2024-07-23 00:20:15.623396] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:01.056 [2024-07-23 00:20:15.623405] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.056 [2024-07-23 00:20:15.623418] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:01.056 [2024-07-23 00:20:15.623428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:01.056 [2024-07-23 00:20:15.623442] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:01.056 [2024-07-23 00:20:15.623452] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.056 [2024-07-23 00:20:15.623472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:01.056 [2024-07-23 00:20:15.623482] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:01.056 [2024-07-23 00:20:15.623494] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 3.38 MiB 00:16:01.056 [2024-07-23 00:20:15.623504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:01.056 [2024-07-23 00:20:15.623515] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:01.056 [2024-07-23 00:20:15.623527] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:01.056 [2024-07-23 00:20:15.623543] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:01.056 [2024-07-23 00:20:15.623556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:01.056 [2024-07-23 00:20:15.623573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:01.056 [2024-07-23 00:20:15.623583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:01.056 [2024-07-23 00:20:15.623596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:01.056 [2024-07-23 00:20:15.623606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:01.056 [2024-07-23 00:20:15.623619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:01.056 [2024-07-23 00:20:15.623630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:01.056 [2024-07-23 00:20:15.623647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:01.056 [2024-07-23 00:20:15.623657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:01.056 [2024-07-23 00:20:15.623670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:01.056 [2024-07-23 00:20:15.623681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:01.056 [2024-07-23 00:20:15.623694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:01.056 [2024-07-23 00:20:15.623704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:01.056 [2024-07-23 00:20:15.623717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:01.056 [2024-07-23 00:20:15.623730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:01.056 [2024-07-23 00:20:15.623743] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:01.056 [2024-07-23 00:20:15.623754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:01.056 [2024-07-23 00:20:15.623776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:01.056 [2024-07-23 00:20:15.623787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:01.056 [2024-07-23 00:20:15.623800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:01.056 [2024-07-23 00:20:15.623812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:01.056 [2024-07-23 00:20:15.623826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.056 [2024-07-23 00:20:15.623837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:01.056 [2024-07-23 00:20:15.623852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:16:01.057 [2024-07-23 00:20:15.623863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.057 [2024-07-23 00:20:15.623902] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:16:01.057 [2024-07-23 00:20:15.623915] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:04.352 [2024-07-23 00:20:19.026611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.352 [2024-07-23 00:20:19.026682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:04.352 [2024-07-23 00:20:19.026702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3408.212 ms 00:16:04.352 [2024-07-23 00:20:19.026730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.611 [2024-07-23 00:20:19.045313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.611 [2024-07-23 00:20:19.045368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:04.611 [2024-07-23 00:20:19.045387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.511 ms 00:16:04.611 [2024-07-23 00:20:19.045398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.611 [2024-07-23 00:20:19.045529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.611 [2024-07-23 00:20:19.045558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:04.611 [2024-07-23 00:20:19.045572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:04.611 [2024-07-23 00:20:19.045582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.611 [2024-07-23 00:20:19.056701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.056756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:04.612 [2024-07-23 00:20:19.056786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.082 ms 00:16:04.612 [2024-07-23 00:20:19.056800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.056847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.056864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:04.612 [2024-07-23 00:20:19.056881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:04.612 [2024-07-23 00:20:19.056895] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.057451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.057466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:04.612 [2024-07-23 00:20:19.057480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:16:04.612 [2024-07-23 00:20:19.057491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.057604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.057620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:04.612 [2024-07-23 00:20:19.057635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:04.612 [2024-07-23 00:20:19.057645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.063783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.063821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:04.612 [2024-07-23 00:20:19.063838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.125 ms 00:16:04.612 [2024-07-23 00:20:19.063864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.071734] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:04.612 [2024-07-23 00:20:19.077788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.077945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:04.612 [2024-07-23 00:20:19.078078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.858 ms 00:16:04.612 [2024-07-23 00:20:19.078122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.148289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.148510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:04.612 [2024-07-23 00:20:19.148534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.222 ms 00:16:04.612 [2024-07-23 00:20:19.148555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.148741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.148758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:04.612 [2024-07-23 00:20:19.148770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:16:04.612 [2024-07-23 00:20:19.148782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.152504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.152546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:04.612 [2024-07-23 00:20:19.152559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.684 ms 00:16:04.612 [2024-07-23 00:20:19.152575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.155427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.155466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Save initial chunk info metadata 00:16:04.612 [2024-07-23 00:20:19.155480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.821 ms 00:16:04.612 [2024-07-23 00:20:19.155493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.155761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.155779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:04.612 [2024-07-23 00:20:19.155790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:16:04.612 [2024-07-23 00:20:19.155814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.195043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.195120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:04.612 [2024-07-23 00:20:19.195137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.248 ms 00:16:04.612 [2024-07-23 00:20:19.195155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.200162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.200231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:04.612 [2024-07-23 00:20:19.200263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.965 ms 00:16:04.612 [2024-07-23 00:20:19.200284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.203926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.203974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:04.612 [2024-07-23 00:20:19.203989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.608 ms 00:16:04.612 [2024-07-23 00:20:19.204002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.207785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.207830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:04.612 [2024-07-23 00:20:19.207844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.752 ms 00:16:04.612 [2024-07-23 00:20:19.207861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.207902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.207917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:04.612 [2024-07-23 00:20:19.207929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:04.612 [2024-07-23 00:20:19.207942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.208007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.612 [2024-07-23 00:20:19.208021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:04.612 [2024-07-23 00:20:19.208032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:04.612 [2024-07-23 00:20:19.208044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.612 [2024-07-23 00:20:19.209132] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 
'FTL startup', duration = 3603.845 ms, result 0
00:16:04.612 {
00:16:04.612 "name": "ftl0",
00:16:04.612 "uuid": "970ea330-cc61-4ff3-ad0d-1c15f97ef82c"
00:16:04.612 }
00:16:04.612 00:20:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0
00:16:04.612 00:20:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name
00:16:04.612 00:20:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0
00:16:04.871 00:20:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
[2024-07-23 00:20:19.523401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:16:04.871 I/O size of 69632 is greater than zero copy threshold (65536).
00:16:04.871 Zero copy mechanism will not be used.
00:16:04.871 Running I/O for 4 seconds...
00:16:09.062
00:16:09.062                          Latency(us)
00:16:09.062 Device Information     : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average      min       max
00:16:09.062 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:16:09.062 ftl0                   :       4.00  1603.98   106.51     0.00   0.00    653.67   245.10   4974.42
00:16:09.062 ===================================================================================================================
00:16:09.062 Total                  :              1603.98   106.51     0.00   0.00    653.67   245.10   4974.42
00:16:09.062 0
00:16:09.062 [2024-07-23 00:20:23.523551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:16:09.062 00:20:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-07-23 00:20:23.632033] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:16:09.062 Running I/O for 4 seconds...
00:16:13.245
00:16:13.245                          Latency(us)
00:16:13.245 Device Information     : runtime(s)      IOPS    MiB/s   Fail/s   TO/s    Average      min       max
00:16:13.245 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:16:13.245 ftl0                   :       4.01  11364.23    44.39     0.00   0.00   11241.24   230.30  33057.52
00:16:13.245 ===================================================================================================================
00:16:13.245 Total                  :             11364.23    44.39     0.00   0.00   11241.24     0.00  33057.52
00:16:13.245 0
00:16:13.245 [2024-07-23 00:20:27.645004] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
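The two randwrite passes above and the verify pass that follows are the same operation with different parameters: queue depth, workload, runtime and I/O size. A minimal sketch of driving that sequence by hand, assuming the bdevperf process was started with -z -T ftl0 as traced earlier in this test; the loop is illustrative, not the literal bdevperf.sh logic.

# Sketch: drive an already-running 'bdevperf -z -T ftl0' through its RPC helper.
PERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

# "<queue depth>:<workload>:<runtime s>:<I/O size B>" for each pass; the
# 69632-byte pass exceeds the 65536-byte zero copy threshold, so it runs
# without zero copy (see the notice above).
for spec in 1:randwrite:4:69632 128:randwrite:4:4096 128:verify:4:4096; do
    IFS=: read -r qd wl t io <<< "$spec"
    "$PERF_PY" perform_tests -q "$qd" -w "$wl" -t "$t" -o "$io"
done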
00:16:13.245 00:20:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-07-23 00:20:27.757296] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:16:13.245 Running I/O for 4 seconds...
00:16:17.433
00:16:17.433                          Latency(us)
00:16:17.433 Device Information     : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average      min       max
00:16:17.433 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:17.433 Verification LBA range: start 0x0 length 0x1400000
00:16:17.433 ftl0                   :       4.01   9175.27    35.84     0.00   0.00   13908.81   250.04  20950.46
00:16:17.433 ===================================================================================================================
00:16:17.433 Total                  :              9175.27    35.84     0.00   0.00   13908.81     0.00  20950.46
00:16:17.433 [2024-07-23 00:20:31.765669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:16:17.433 0
00:16:17.433 00:20:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-07-23 00:20:31.941661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:17.433 [2024-07-23 00:20:31.941717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:16:17.433 [2024-07-23 00:20:31.941734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:16:17.433 [2024-07-23 00:20:31.941747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:17.433 [2024-07-23 00:20:31.941775] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:17.433 [2024-07-23 00:20:31.942461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:17.433 [2024-07-23 00:20:31.942475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:16:17.433 [2024-07-23 00:20:31.942497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms
00:16:17.433 [2024-07-23 00:20:31.942508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:17.433 [2024-07-23 00:20:31.944577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:17.433 [2024-07-23 00:20:31.944617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:16:17.433 [2024-07-23 00:20:31.944634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.036 ms
00:16:17.433 [2024-07-23 00:20:31.944645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:17.694 [2024-07-23 00:20:32.148181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:17.694 [2024-07-23 00:20:32.148244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:16:17.694 [2024-07-23 00:20:32.148293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 203.832 ms
00:16:17.694 [2024-07-23 00:20:32.148322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:17.694 [2024-07-23 00:20:32.153537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:17.694 [2024-07-23 00:20:32.153569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:16:17.694 [2024-07-23 00:20:32.153585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.180 ms
00:16:17.694 [2024-07-23 00:20:32.153595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:17.694 [2024-07-23 00:20:32.155338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:17.694 [2024-07-23 00:20:32.155372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:16:17.694 [2024-07-23 00:20:32.155387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 1.677 ms 00:16:17.694 [2024-07-23 00:20:32.155397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.694 [2024-07-23 00:20:32.159915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.694 [2024-07-23 00:20:32.159955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:17.694 [2024-07-23 00:20:32.159971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.490 ms 00:16:17.694 [2024-07-23 00:20:32.159982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.694 [2024-07-23 00:20:32.160093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.694 [2024-07-23 00:20:32.160106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:17.694 [2024-07-23 00:20:32.160120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:17.694 [2024-07-23 00:20:32.160130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.694 [2024-07-23 00:20:32.162206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.694 [2024-07-23 00:20:32.162241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:17.694 [2024-07-23 00:20:32.162256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.057 ms 00:16:17.694 [2024-07-23 00:20:32.162281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.694 [2024-07-23 00:20:32.163729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.694 [2024-07-23 00:20:32.163762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:17.694 [2024-07-23 00:20:32.163776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.403 ms 00:16:17.694 [2024-07-23 00:20:32.163786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.694 [2024-07-23 00:20:32.164993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.694 [2024-07-23 00:20:32.165027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:17.694 [2024-07-23 00:20:32.165042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.175 ms 00:16:17.694 [2024-07-23 00:20:32.165052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.694 [2024-07-23 00:20:32.166300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.694 [2024-07-23 00:20:32.166335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:17.694 [2024-07-23 00:20:32.166349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:16:17.694 [2024-07-23 00:20:32.166359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.694 [2024-07-23 00:20:32.166391] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:17.694 [2024-07-23 00:20:32.166414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 
[2024-07-23 00:20:32.166465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:16:17.694 [2024-07-23 00:20:32.166774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.166995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.167006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.167019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.167030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.167043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.167054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.167069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.167080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:17.694 [2024-07-23 00:20:32.167094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:17.695 [2024-07-23 00:20:32.167673] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:17.695 [2024-07-23 00:20:32.167685] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 970ea330-cc61-4ff3-ad0d-1c15f97ef82c 00:16:17.695 [2024-07-23 00:20:32.167697] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:17.695 [2024-07-23 00:20:32.167710] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:16:17.695 [2024-07-23 00:20:32.167719] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:17.695 [2024-07-23 00:20:32.167732] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:17.695 [2024-07-23 00:20:32.167742] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:17.695 [2024-07-23 00:20:32.167758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:17.695 [2024-07-23 00:20:32.167769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:17.695 [2024-07-23 00:20:32.167780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:17.695 [2024-07-23 00:20:32.167789] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:17.695 [2024-07-23 00:20:32.167802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.695 [2024-07-23 00:20:32.167812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:17.695 [2024-07-23 00:20:32.167828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:16:17.695 [2024-07-23 00:20:32.167838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.695 [2024-07-23 00:20:32.169579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.695 [2024-07-23 00:20:32.169601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:17.695 [2024-07-23 00:20:32.169615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.707 ms 00:16:17.695 [2024-07-23 00:20:32.169632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.695 [2024-07-23 00:20:32.169738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.695 [2024-07-23 00:20:32.169752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:17.695 [2024-07-23 00:20:32.169765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:16:17.695 [2024-07-23 00:20:32.169775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.695 [2024-07-23 00:20:32.176112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.695 [2024-07-23 00:20:32.176226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:17.695 [2024-07-23 00:20:32.176313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.695 [2024-07-23 00:20:32.176352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.695 [2024-07-23 00:20:32.176432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.695 [2024-07-23 00:20:32.176467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:17.695 [2024-07-23 00:20:32.176499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.695 [2024-07-23 00:20:32.176529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.695 [2024-07-23 00:20:32.176643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.695 [2024-07-23 00:20:32.176714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:17.695 [2024-07-23 00:20:32.176816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.695 [2024-07-23 00:20:32.176847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.695 [2024-07-23 00:20:32.176896] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.695 [2024-07-23 00:20:32.176929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:17.695 [2024-07-23 00:20:32.176965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.695 [2024-07-23 00:20:32.176994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.695 [2024-07-23 00:20:32.189823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.695 [2024-07-23 00:20:32.190027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:17.695 [2024-07-23 00:20:32.190121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.695 [2024-07-23 00:20:32.190158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.695 [2024-07-23 00:20:32.198397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.695 [2024-07-23 00:20:32.198558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:17.695 [2024-07-23 00:20:32.198714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.695 [2024-07-23 00:20:32.198753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.695 [2024-07-23 00:20:32.198859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.695 [2024-07-23 00:20:32.198894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:17.695 [2024-07-23 00:20:32.198927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.695 [2024-07-23 00:20:32.198957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.695 [2024-07-23 00:20:32.199085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.695 [2024-07-23 00:20:32.199133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:17.696 [2024-07-23 00:20:32.199167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.696 [2024-07-23 00:20:32.199200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.696 [2024-07-23 00:20:32.199328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.696 [2024-07-23 00:20:32.199468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:17.696 [2024-07-23 00:20:32.199503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.696 [2024-07-23 00:20:32.199532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.696 [2024-07-23 00:20:32.199646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.696 [2024-07-23 00:20:32.199685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:17.696 [2024-07-23 00:20:32.199720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.696 [2024-07-23 00:20:32.199750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.696 [2024-07-23 00:20:32.199821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.696 [2024-07-23 00:20:32.199909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:17.696 [2024-07-23 00:20:32.199949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.696 [2024-07-23 00:20:32.199979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:16:17.696 [2024-07-23 00:20:32.200052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:17.696 [2024-07-23 00:20:32.200092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:17.696 [2024-07-23 00:20:32.200168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:17.696 [2024-07-23 00:20:32.200212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:17.696 [2024-07-23 00:20:32.200451] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 259.163 ms, result 0
00:16:17.696 true
00:16:17.696 00:20:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 88383
00:16:17.696 00:20:32 ftl.ftl_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 88383 ']'
00:16:17.696 00:20:32 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # kill -0 88383
00:16:17.696 00:20:32 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # uname
00:16:17.696 00:20:32 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:16:17.696 00:20:32 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88383
00:16:17.696 00:20:32 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:16:17.696 00:20:32 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:16:17.696 killing process with pid 88383
Received shutdown signal, test time was about 4.000000 seconds
00:16:17.696
00:16:17.696                          Latency(us)
00:16:17.696 Device Information     : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average      min       max
00:16:17.696 ===================================================================================================================
00:16:17.696 Total                  :                 0.00     0.00     0.00   0.00      0.00     0.00      0.00
00:16:17.696 00:20:32 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88383'
00:16:17.696 00:20:32 ftl.ftl_bdevperf -- common/autotest_common.sh@965 -- # kill 88383
00:16:17.696 00:20:32 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # wait 88383
00:16:17.955 00:20:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:16:17.955 00:20:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:16:17.955 00:20:32 ftl.ftl_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:17.955 00:20:32 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:18.214 Remove shared memory files
00:16:18.214 00:20:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm
00:16:18.214 00:20:32 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:16:18.214 00:20:32 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:16:18.214 00:20:32 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:16:18.214 00:20:32 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:16:18.214 00:20:32 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:16:18.214 00:20:32 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:16:18.214 ************************************
00:16:18.214 END TEST ftl_bdevperf
00:16:18.214 ************************************
00:16:18.214
00:16:18.214 real 0m20.989s
00:16:18.214 user 0m23.408s
00:16:18.214 sys 0m1.161s
00:16:18.214 00:20:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:16:18.214 00:20:32 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
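The teardown traced above runs the autotest killprocess helper: confirm the pid is alive, check what it is, refuse to touch anything running as sudo, then kill and reap it. A condensed sketch of that pattern, simplified from the common/autotest_common.sh trace rather than copied from it, assuming Linux.

# Sketch of the killprocess pattern traced above (simplified).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1          # still running?
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 here
        [ "$name" = sudo ] && return 1              # never kill sudo itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                     # reap; propagate exit code
}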
00:16:18.214 00:20:32 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:16:18.214 00:20:32 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:16:18.214 00:20:32 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:16:18.214 00:20:32 ftl -- common/autotest_common.sh@10 -- # set +x
00:16:18.214 ************************************
00:16:18.214 START TEST ftl_trim
00:16:18.214 ************************************
00:16:18.214 00:20:32 ftl.ftl_trim -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:16:18.214 * Looking for test storage...
00:16:18.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:16:18.214 00:20:32 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:16:18.473 00:20:32 ftl.ftl_trim
-- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=88724 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 88724 00:16:18.473 00:20:32 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:18.473 00:20:32 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 88724 ']' 00:16:18.473 00:20:32 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.473 00:20:32 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:18.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.473 00:20:32 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.473 00:20:32 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:18.473 00:20:32 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:18.473 [2024-07-23 00:20:33.025591] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
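The waitforlisten call traced above blocks until the freshly started target answers on /var/tmp/spdk.sock. A minimal stand-in for that helper, assuming the rpc.py path from this log; the real common/autotest_common.sh version also enforces a retry budget, so treat this polling loop as illustrative only.

# Illustrative wait-for-RPC loop (simplified waitforlisten).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
svcpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    kill -0 "$svcpid" 2>/dev/null || exit 1   # target died before listening
    sleep 0.5
done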
00:16:18.473 [2024-07-23 00:20:33.025717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88724 ] 00:16:18.779 [2024-07-23 00:20:33.176572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:18.779 [2024-07-23 00:20:33.220963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.779 [2024-07-23 00:20:33.221179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.779 [2024-07-23 00:20:33.221048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.408 00:20:33 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:19.408 00:20:33 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:16:19.408 00:20:33 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:19.408 00:20:33 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:19.408 00:20:33 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:19.408 00:20:33 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:19.408 00:20:33 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:19.408 00:20:33 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:19.408 00:20:34 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:19.408 00:20:34 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:19.408 00:20:34 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:19.408 00:20:34 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:16:19.408 00:20:34 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:19.408 00:20:34 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:19.408 00:20:34 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:16:19.408 00:20:34 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:19.666 00:20:34 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:19.666 { 00:16:19.666 "name": "nvme0n1", 00:16:19.666 "aliases": [ 00:16:19.666 "d48e3744-0db5-40ab-8011-bbdde8d622fa" 00:16:19.666 ], 00:16:19.666 "product_name": "NVMe disk", 00:16:19.666 "block_size": 4096, 00:16:19.666 "num_blocks": 1310720, 00:16:19.666 "uuid": "d48e3744-0db5-40ab-8011-bbdde8d622fa", 00:16:19.666 "assigned_rate_limits": { 00:16:19.666 "rw_ios_per_sec": 0, 00:16:19.666 "rw_mbytes_per_sec": 0, 00:16:19.666 "r_mbytes_per_sec": 0, 00:16:19.666 "w_mbytes_per_sec": 0 00:16:19.666 }, 00:16:19.666 "claimed": true, 00:16:19.666 "claim_type": "read_many_write_one", 00:16:19.666 "zoned": false, 00:16:19.666 "supported_io_types": { 00:16:19.666 "read": true, 00:16:19.666 "write": true, 00:16:19.666 "unmap": true, 00:16:19.666 "write_zeroes": true, 00:16:19.666 "flush": true, 00:16:19.666 "reset": true, 00:16:19.666 "compare": true, 00:16:19.666 "compare_and_write": false, 00:16:19.666 "abort": true, 00:16:19.666 "nvme_admin": true, 00:16:19.666 "nvme_io": true 00:16:19.666 }, 00:16:19.666 "driver_specific": { 00:16:19.666 "nvme": [ 00:16:19.666 { 00:16:19.666 "pci_address": "0000:00:11.0", 00:16:19.666 "trid": { 00:16:19.666 "trtype": "PCIe", 00:16:19.666 "traddr": "0000:00:11.0" 00:16:19.666 }, 00:16:19.666 "ctrlr_data": { 
00:16:19.666 "cntlid": 0, 00:16:19.666 "vendor_id": "0x1b36", 00:16:19.666 "model_number": "QEMU NVMe Ctrl", 00:16:19.666 "serial_number": "12341", 00:16:19.666 "firmware_revision": "8.0.0", 00:16:19.666 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:19.666 "oacs": { 00:16:19.666 "security": 0, 00:16:19.666 "format": 1, 00:16:19.666 "firmware": 0, 00:16:19.666 "ns_manage": 1 00:16:19.666 }, 00:16:19.666 "multi_ctrlr": false, 00:16:19.666 "ana_reporting": false 00:16:19.666 }, 00:16:19.666 "vs": { 00:16:19.666 "nvme_version": "1.4" 00:16:19.666 }, 00:16:19.666 "ns_data": { 00:16:19.666 "id": 1, 00:16:19.666 "can_share": false 00:16:19.666 } 00:16:19.666 } 00:16:19.666 ], 00:16:19.666 "mp_policy": "active_passive" 00:16:19.666 } 00:16:19.666 } 00:16:19.666 ]' 00:16:19.666 00:20:34 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:19.666 00:20:34 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:19.666 00:20:34 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:19.666 00:20:34 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=1310720 00:16:19.667 00:20:34 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:16:19.667 00:20:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 5120 00:16:19.925 00:20:34 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:19.925 00:20:34 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:19.925 00:20:34 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:19.925 00:20:34 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:19.926 00:20:34 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:19.926 00:20:34 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=e4bb2305-7407-4651-9294-216739fdb3f6 00:16:19.926 00:20:34 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:19.926 00:20:34 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e4bb2305-7407-4651-9294-216739fdb3f6 00:16:20.184 00:20:34 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:20.443 00:20:34 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=3cedc874-5b09-4356-92c4-040f38f29b9b 00:16:20.443 00:20:34 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3cedc874-5b09-4356-92c4-040f38f29b9b 00:16:20.714 00:20:35 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=6a3ef4ab-1282-46c0-b9b5-097c70082caf 00:16:20.714 00:20:35 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6a3ef4ab-1282-46c0-b9b5-097c70082caf 00:16:20.714 00:20:35 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:20.714 00:20:35 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:20.714 00:20:35 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=6a3ef4ab-1282-46c0-b9b5-097c70082caf 00:16:20.714 00:20:35 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:20.714 00:20:35 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 6a3ef4ab-1282-46c0-b9b5-097c70082caf 00:16:20.714 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=6a3ef4ab-1282-46c0-b9b5-097c70082caf 00:16:20.714 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:20.714 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:20.714 00:20:35 ftl.ftl_trim -- 
common/autotest_common.sh@1377 -- # local nb 00:16:20.714 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a3ef4ab-1282-46c0-b9b5-097c70082caf 00:16:20.714 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:20.714 { 00:16:20.714 "name": "6a3ef4ab-1282-46c0-b9b5-097c70082caf", 00:16:20.714 "aliases": [ 00:16:20.714 "lvs/nvme0n1p0" 00:16:20.714 ], 00:16:20.714 "product_name": "Logical Volume", 00:16:20.714 "block_size": 4096, 00:16:20.714 "num_blocks": 26476544, 00:16:20.714 "uuid": "6a3ef4ab-1282-46c0-b9b5-097c70082caf", 00:16:20.714 "assigned_rate_limits": { 00:16:20.714 "rw_ios_per_sec": 0, 00:16:20.714 "rw_mbytes_per_sec": 0, 00:16:20.714 "r_mbytes_per_sec": 0, 00:16:20.714 "w_mbytes_per_sec": 0 00:16:20.714 }, 00:16:20.714 "claimed": false, 00:16:20.714 "zoned": false, 00:16:20.714 "supported_io_types": { 00:16:20.714 "read": true, 00:16:20.714 "write": true, 00:16:20.714 "unmap": true, 00:16:20.714 "write_zeroes": true, 00:16:20.714 "flush": false, 00:16:20.714 "reset": true, 00:16:20.714 "compare": false, 00:16:20.714 "compare_and_write": false, 00:16:20.714 "abort": false, 00:16:20.714 "nvme_admin": false, 00:16:20.714 "nvme_io": false 00:16:20.714 }, 00:16:20.714 "driver_specific": { 00:16:20.714 "lvol": { 00:16:20.714 "lvol_store_uuid": "3cedc874-5b09-4356-92c4-040f38f29b9b", 00:16:20.714 "base_bdev": "nvme0n1", 00:16:20.714 "thin_provision": true, 00:16:20.714 "num_allocated_clusters": 0, 00:16:20.714 "snapshot": false, 00:16:20.714 "clone": false, 00:16:20.714 "esnap_clone": false 00:16:20.714 } 00:16:20.715 } 00:16:20.715 } 00:16:20.715 ]' 00:16:20.715 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:20.715 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:20.715 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:20.976 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:20.976 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:20.976 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:16:20.976 00:20:35 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:20.976 00:20:35 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:20.977 00:20:35 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:21.234 00:20:35 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:21.234 00:20:35 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:21.234 00:20:35 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 6a3ef4ab-1282-46c0-b9b5-097c70082caf 00:16:21.234 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=6a3ef4ab-1282-46c0-b9b5-097c70082caf 00:16:21.234 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:21.234 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:21.235 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:16:21.235 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a3ef4ab-1282-46c0-b9b5-097c70082caf 00:16:21.235 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:21.235 { 00:16:21.235 "name": "6a3ef4ab-1282-46c0-b9b5-097c70082caf", 00:16:21.235 "aliases": [ 00:16:21.235 
"lvs/nvme0n1p0" 00:16:21.235 ], 00:16:21.235 "product_name": "Logical Volume", 00:16:21.235 "block_size": 4096, 00:16:21.235 "num_blocks": 26476544, 00:16:21.235 "uuid": "6a3ef4ab-1282-46c0-b9b5-097c70082caf", 00:16:21.235 "assigned_rate_limits": { 00:16:21.235 "rw_ios_per_sec": 0, 00:16:21.235 "rw_mbytes_per_sec": 0, 00:16:21.235 "r_mbytes_per_sec": 0, 00:16:21.235 "w_mbytes_per_sec": 0 00:16:21.235 }, 00:16:21.235 "claimed": false, 00:16:21.235 "zoned": false, 00:16:21.235 "supported_io_types": { 00:16:21.235 "read": true, 00:16:21.235 "write": true, 00:16:21.235 "unmap": true, 00:16:21.235 "write_zeroes": true, 00:16:21.235 "flush": false, 00:16:21.235 "reset": true, 00:16:21.235 "compare": false, 00:16:21.235 "compare_and_write": false, 00:16:21.235 "abort": false, 00:16:21.235 "nvme_admin": false, 00:16:21.235 "nvme_io": false 00:16:21.235 }, 00:16:21.235 "driver_specific": { 00:16:21.235 "lvol": { 00:16:21.235 "lvol_store_uuid": "3cedc874-5b09-4356-92c4-040f38f29b9b", 00:16:21.235 "base_bdev": "nvme0n1", 00:16:21.235 "thin_provision": true, 00:16:21.235 "num_allocated_clusters": 0, 00:16:21.235 "snapshot": false, 00:16:21.235 "clone": false, 00:16:21.235 "esnap_clone": false 00:16:21.235 } 00:16:21.235 } 00:16:21.235 } 00:16:21.235 ]' 00:16:21.235 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:21.235 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:21.235 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:21.493 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:21.493 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:21.493 00:20:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:16:21.493 00:20:35 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:21.493 00:20:35 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:21.493 00:20:36 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:21.493 00:20:36 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:21.493 00:20:36 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 6a3ef4ab-1282-46c0-b9b5-097c70082caf 00:16:21.493 00:20:36 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=6a3ef4ab-1282-46c0-b9b5-097c70082caf 00:16:21.493 00:20:36 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:21.493 00:20:36 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:21.493 00:20:36 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:16:21.493 00:20:36 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a3ef4ab-1282-46c0-b9b5-097c70082caf 00:16:21.752 00:20:36 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:21.752 { 00:16:21.752 "name": "6a3ef4ab-1282-46c0-b9b5-097c70082caf", 00:16:21.753 "aliases": [ 00:16:21.753 "lvs/nvme0n1p0" 00:16:21.753 ], 00:16:21.753 "product_name": "Logical Volume", 00:16:21.753 "block_size": 4096, 00:16:21.753 "num_blocks": 26476544, 00:16:21.753 "uuid": "6a3ef4ab-1282-46c0-b9b5-097c70082caf", 00:16:21.753 "assigned_rate_limits": { 00:16:21.753 "rw_ios_per_sec": 0, 00:16:21.753 "rw_mbytes_per_sec": 0, 00:16:21.753 "r_mbytes_per_sec": 0, 00:16:21.753 "w_mbytes_per_sec": 0 00:16:21.753 }, 00:16:21.753 "claimed": false, 00:16:21.753 "zoned": false, 00:16:21.753 "supported_io_types": { 00:16:21.753 "read": 
true, 00:16:21.753 "write": true, 00:16:21.753 "unmap": true, 00:16:21.753 "write_zeroes": true, 00:16:21.753 "flush": false, 00:16:21.753 "reset": true, 00:16:21.753 "compare": false, 00:16:21.753 "compare_and_write": false, 00:16:21.753 "abort": false, 00:16:21.753 "nvme_admin": false, 00:16:21.753 "nvme_io": false 00:16:21.753 }, 00:16:21.753 "driver_specific": { 00:16:21.753 "lvol": { 00:16:21.753 "lvol_store_uuid": "3cedc874-5b09-4356-92c4-040f38f29b9b", 00:16:21.753 "base_bdev": "nvme0n1", 00:16:21.753 "thin_provision": true, 00:16:21.753 "num_allocated_clusters": 0, 00:16:21.753 "snapshot": false, 00:16:21.753 "clone": false, 00:16:21.753 "esnap_clone": false 00:16:21.753 } 00:16:21.753 } 00:16:21.753 } 00:16:21.753 ]' 00:16:21.753 00:20:36 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:21.753 00:20:36 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:21.753 00:20:36 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:21.753 00:20:36 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:21.753 00:20:36 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:21.753 00:20:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:16:21.753 00:20:36 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:21.753 00:20:36 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6a3ef4ab-1282-46c0-b9b5-097c70082caf -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:22.013 [2024-07-23 00:20:36.538186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.013 [2024-07-23 00:20:36.538248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:22.013 [2024-07-23 00:20:36.538281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:22.013 [2024-07-23 00:20:36.538292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.013 [2024-07-23 00:20:36.540939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.013 [2024-07-23 00:20:36.540978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:22.013 [2024-07-23 00:20:36.540994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.610 ms 00:16:22.013 [2024-07-23 00:20:36.541016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.013 [2024-07-23 00:20:36.541168] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:22.013 [2024-07-23 00:20:36.541420] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:22.013 [2024-07-23 00:20:36.541453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.013 [2024-07-23 00:20:36.541464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:22.013 [2024-07-23 00:20:36.541490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:16:22.013 [2024-07-23 00:20:36.541503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.013 [2024-07-23 00:20:36.541615] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 05fcf3be-df9c-41ef-ae3d-41db770ccbb0 00:16:22.013 [2024-07-23 00:20:36.543036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.013 [2024-07-23 00:20:36.543071] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:22.013 [2024-07-23 00:20:36.543084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:22.013 [2024-07-23 00:20:36.543096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.013 [2024-07-23 00:20:36.550556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.013 [2024-07-23 00:20:36.550587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:22.013 [2024-07-23 00:20:36.550599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.390 ms 00:16:22.013 [2024-07-23 00:20:36.550612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.013 [2024-07-23 00:20:36.550739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.013 [2024-07-23 00:20:36.550760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:22.013 [2024-07-23 00:20:36.550771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:22.013 [2024-07-23 00:20:36.550797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.013 [2024-07-23 00:20:36.550840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.013 [2024-07-23 00:20:36.550854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:22.013 [2024-07-23 00:20:36.550864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:22.013 [2024-07-23 00:20:36.550878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.013 [2024-07-23 00:20:36.550914] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:22.013 [2024-07-23 00:20:36.552717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.013 [2024-07-23 00:20:36.552745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:22.013 [2024-07-23 00:20:36.552759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.809 ms 00:16:22.013 [2024-07-23 00:20:36.552772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.013 [2024-07-23 00:20:36.552834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.013 [2024-07-23 00:20:36.552845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:22.013 [2024-07-23 00:20:36.552858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:22.013 [2024-07-23 00:20:36.552868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.013 [2024-07-23 00:20:36.552907] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:22.013 [2024-07-23 00:20:36.553096] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:22.013 [2024-07-23 00:20:36.553115] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:22.013 [2024-07-23 00:20:36.553129] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:22.013 [2024-07-23 00:20:36.553156] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:22.013 [2024-07-23 00:20:36.553168] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV 
cache device capacity: 5171.00 MiB 00:16:22.013 [2024-07-23 00:20:36.553182] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:22.013 [2024-07-23 00:20:36.553206] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:22.013 [2024-07-23 00:20:36.553219] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:22.013 [2024-07-23 00:20:36.553228] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:22.013 [2024-07-23 00:20:36.553241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.013 [2024-07-23 00:20:36.553253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:22.013 [2024-07-23 00:20:36.553286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:16:22.014 [2024-07-23 00:20:36.553296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.014 [2024-07-23 00:20:36.553384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.014 [2024-07-23 00:20:36.553395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:22.014 [2024-07-23 00:20:36.553412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:22.014 [2024-07-23 00:20:36.553434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.014 [2024-07-23 00:20:36.553556] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:22.014 [2024-07-23 00:20:36.553569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:22.014 [2024-07-23 00:20:36.553584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:22.014 [2024-07-23 00:20:36.553594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:22.014 [2024-07-23 00:20:36.553607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:22.014 [2024-07-23 00:20:36.553616] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:22.014 [2024-07-23 00:20:36.553627] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:22.014 [2024-07-23 00:20:36.553637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:22.014 [2024-07-23 00:20:36.553649] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:22.014 [2024-07-23 00:20:36.553658] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:22.014 [2024-07-23 00:20:36.553669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:22.014 [2024-07-23 00:20:36.553678] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:22.014 [2024-07-23 00:20:36.553689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:22.014 [2024-07-23 00:20:36.553699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:22.014 [2024-07-23 00:20:36.553713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:22.014 [2024-07-23 00:20:36.553723] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:22.014 [2024-07-23 00:20:36.553734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:22.014 [2024-07-23 00:20:36.553743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:22.014 [2024-07-23 00:20:36.553756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:16:22.014 [2024-07-23 00:20:36.553765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:22.014 [2024-07-23 00:20:36.553777] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:22.014 [2024-07-23 00:20:36.553785] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:22.014 [2024-07-23 00:20:36.553796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:22.014 [2024-07-23 00:20:36.553806] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:22.014 [2024-07-23 00:20:36.553817] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:22.014 [2024-07-23 00:20:36.553827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:22.014 [2024-07-23 00:20:36.553854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:22.014 [2024-07-23 00:20:36.553864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:22.014 [2024-07-23 00:20:36.553875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:22.014 [2024-07-23 00:20:36.553884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:22.014 [2024-07-23 00:20:36.553897] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:22.014 [2024-07-23 00:20:36.553907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:22.014 [2024-07-23 00:20:36.553919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:22.014 [2024-07-23 00:20:36.553928] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:22.014 [2024-07-23 00:20:36.553939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:22.014 [2024-07-23 00:20:36.553962] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:22.014 [2024-07-23 00:20:36.553974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:22.014 [2024-07-23 00:20:36.553983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:22.014 [2024-07-23 00:20:36.553995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:22.014 [2024-07-23 00:20:36.554004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:22.014 [2024-07-23 00:20:36.554015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:22.014 [2024-07-23 00:20:36.554025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:22.014 [2024-07-23 00:20:36.554036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:22.014 [2024-07-23 00:20:36.554044] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:22.014 [2024-07-23 00:20:36.554058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:22.014 [2024-07-23 00:20:36.554078] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:22.014 [2024-07-23 00:20:36.554093] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:22.014 [2024-07-23 00:20:36.554103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:22.014 [2024-07-23 00:20:36.554114] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:22.014 [2024-07-23 00:20:36.554123] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:22.014 [2024-07-23 00:20:36.554135] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:16:22.014 [2024-07-23 00:20:36.554144] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:22.014 [2024-07-23 00:20:36.554156] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:22.014 [2024-07-23 00:20:36.554169] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:22.014 [2024-07-23 00:20:36.554184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:22.014 [2024-07-23 00:20:36.554195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:22.014 [2024-07-23 00:20:36.554208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:22.014 [2024-07-23 00:20:36.554219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:22.014 [2024-07-23 00:20:36.554233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:22.014 [2024-07-23 00:20:36.554243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:22.014 [2024-07-23 00:20:36.554256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:22.014 [2024-07-23 00:20:36.554276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:22.014 [2024-07-23 00:20:36.554292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:22.014 [2024-07-23 00:20:36.554302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:22.014 [2024-07-23 00:20:36.554315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:22.014 [2024-07-23 00:20:36.554324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:22.014 [2024-07-23 00:20:36.554338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:22.014 [2024-07-23 00:20:36.554349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:22.014 [2024-07-23 00:20:36.554362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:22.014 [2024-07-23 00:20:36.554372] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:22.014 [2024-07-23 00:20:36.554385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:22.014 [2024-07-23 00:20:36.554399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:22.014 [2024-07-23 
00:20:36.554412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:22.014 [2024-07-23 00:20:36.554422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:22.014 [2024-07-23 00:20:36.554435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:22.014 [2024-07-23 00:20:36.554446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.014 [2024-07-23 00:20:36.554458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:22.014 [2024-07-23 00:20:36.554468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:16:22.014 [2024-07-23 00:20:36.554483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.014 [2024-07-23 00:20:36.554562] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:16:22.014 [2024-07-23 00:20:36.554577] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:25.304 [2024-07-23 00:20:39.894057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.304 [2024-07-23 00:20:39.894129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:25.304 [2024-07-23 00:20:39.894146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3344.914 ms 00:16:25.304 [2024-07-23 00:20:39.894163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.304 [2024-07-23 00:20:39.905535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.304 [2024-07-23 00:20:39.905584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:25.304 [2024-07-23 00:20:39.905601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.231 ms 00:16:25.304 [2024-07-23 00:20:39.905614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.304 [2024-07-23 00:20:39.905771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.304 [2024-07-23 00:20:39.905789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:25.304 [2024-07-23 00:20:39.905801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:16:25.304 [2024-07-23 00:20:39.905818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.304 [2024-07-23 00:20:39.926280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.304 [2024-07-23 00:20:39.926329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:25.304 [2024-07-23 00:20:39.926347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.415 ms 00:16:25.304 [2024-07-23 00:20:39.926364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.304 [2024-07-23 00:20:39.926492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.304 [2024-07-23 00:20:39.926531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:25.304 [2024-07-23 00:20:39.926545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:25.304 [2024-07-23 00:20:39.926565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.304 [2024-07-23 
00:20:39.927059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.304 [2024-07-23 00:20:39.927085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:25.304 [2024-07-23 00:20:39.927099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:16:25.305 [2024-07-23 00:20:39.927114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.305 [2024-07-23 00:20:39.927309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.305 [2024-07-23 00:20:39.927334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:25.305 [2024-07-23 00:20:39.927348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:16:25.305 [2024-07-23 00:20:39.927363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.305 [2024-07-23 00:20:39.935290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.305 [2024-07-23 00:20:39.935332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:25.305 [2024-07-23 00:20:39.935347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.863 ms 00:16:25.305 [2024-07-23 00:20:39.935361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.305 [2024-07-23 00:20:39.944377] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:25.305 [2024-07-23 00:20:39.961050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.305 [2024-07-23 00:20:39.961093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:25.305 [2024-07-23 00:20:39.961128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.549 ms 00:16:25.305 [2024-07-23 00:20:39.961138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.598 [2024-07-23 00:20:40.044090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.598 [2024-07-23 00:20:40.044155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:25.598 [2024-07-23 00:20:40.044174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.956 ms 00:16:25.598 [2024-07-23 00:20:40.044186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.598 [2024-07-23 00:20:40.044466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.598 [2024-07-23 00:20:40.044483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:25.598 [2024-07-23 00:20:40.044498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:16:25.598 [2024-07-23 00:20:40.044508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.598 [2024-07-23 00:20:40.048297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.598 [2024-07-23 00:20:40.048333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:25.598 [2024-07-23 00:20:40.048349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.731 ms 00:16:25.598 [2024-07-23 00:20:40.048359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.598 [2024-07-23 00:20:40.051507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.598 [2024-07-23 00:20:40.051539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:25.598 [2024-07-23 
00:20:40.051556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.058 ms 00:16:25.598 [2024-07-23 00:20:40.051566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.598 [2024-07-23 00:20:40.051883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.598 [2024-07-23 00:20:40.051915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:25.598 [2024-07-23 00:20:40.051930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:16:25.598 [2024-07-23 00:20:40.051940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.598 [2024-07-23 00:20:40.091664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.598 [2024-07-23 00:20:40.091714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:25.598 [2024-07-23 00:20:40.091733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.694 ms 00:16:25.598 [2024-07-23 00:20:40.091744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.598 [2024-07-23 00:20:40.096256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.598 [2024-07-23 00:20:40.096304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:25.598 [2024-07-23 00:20:40.096319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.425 ms 00:16:25.598 [2024-07-23 00:20:40.096330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.598 [2024-07-23 00:20:40.099750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.598 [2024-07-23 00:20:40.099781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:25.598 [2024-07-23 00:20:40.099796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.340 ms 00:16:25.598 [2024-07-23 00:20:40.099806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.598 [2024-07-23 00:20:40.103726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.598 [2024-07-23 00:20:40.103759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:25.598 [2024-07-23 00:20:40.103775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.847 ms 00:16:25.598 [2024-07-23 00:20:40.103785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.598 [2024-07-23 00:20:40.103867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.598 [2024-07-23 00:20:40.103881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:25.598 [2024-07-23 00:20:40.103896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:25.598 [2024-07-23 00:20:40.103905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.598 [2024-07-23 00:20:40.104009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.598 [2024-07-23 00:20:40.104021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:25.598 [2024-07-23 00:20:40.104034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:25.598 [2024-07-23 00:20:40.104044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.598 [2024-07-23 00:20:40.105309] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:25.598 [2024-07-23 00:20:40.106256] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3572.565 ms, result 0 00:16:25.598 [2024-07-23 00:20:40.107229] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:25.598 { 00:16:25.598 "name": "ftl0", 00:16:25.598 "uuid": "05fcf3be-df9c-41ef-ae3d-41db770ccbb0" 00:16:25.598 } 00:16:25.598 00:20:40 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:25.598 00:20:40 ftl.ftl_trim -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:16:25.598 00:20:40 ftl.ftl_trim -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:25.598 00:20:40 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local i 00:16:25.598 00:20:40 ftl.ftl_trim -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:25.598 00:20:40 ftl.ftl_trim -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:25.598 00:20:40 ftl.ftl_trim -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:25.877 00:20:40 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:25.877 [ 00:16:25.877 { 00:16:25.877 "name": "ftl0", 00:16:25.877 "aliases": [ 00:16:25.877 "05fcf3be-df9c-41ef-ae3d-41db770ccbb0" 00:16:25.877 ], 00:16:25.877 "product_name": "FTL disk", 00:16:25.877 "block_size": 4096, 00:16:25.877 "num_blocks": 23592960, 00:16:25.877 "uuid": "05fcf3be-df9c-41ef-ae3d-41db770ccbb0", 00:16:25.877 "assigned_rate_limits": { 00:16:25.877 "rw_ios_per_sec": 0, 00:16:25.877 "rw_mbytes_per_sec": 0, 00:16:25.877 "r_mbytes_per_sec": 0, 00:16:25.877 "w_mbytes_per_sec": 0 00:16:25.877 }, 00:16:25.877 "claimed": false, 00:16:25.877 "zoned": false, 00:16:25.877 "supported_io_types": { 00:16:25.877 "read": true, 00:16:25.877 "write": true, 00:16:25.877 "unmap": true, 00:16:25.877 "write_zeroes": true, 00:16:25.877 "flush": true, 00:16:25.877 "reset": false, 00:16:25.877 "compare": false, 00:16:25.877 "compare_and_write": false, 00:16:25.877 "abort": false, 00:16:25.877 "nvme_admin": false, 00:16:25.877 "nvme_io": false 00:16:25.877 }, 00:16:25.877 "driver_specific": { 00:16:25.877 "ftl": { 00:16:25.877 "base_bdev": "6a3ef4ab-1282-46c0-b9b5-097c70082caf", 00:16:25.877 "cache": "nvc0n1p0" 00:16:25.877 } 00:16:25.877 } 00:16:25.877 } 00:16:25.877 ] 00:16:25.877 00:20:40 ftl.ftl_trim -- common/autotest_common.sh@903 -- # return 0 00:16:25.877 00:20:40 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:25.877 00:20:40 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:26.136 00:20:40 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:26.136 00:20:40 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:26.395 00:20:40 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:26.395 { 00:16:26.395 "name": "ftl0", 00:16:26.395 "aliases": [ 00:16:26.395 "05fcf3be-df9c-41ef-ae3d-41db770ccbb0" 00:16:26.395 ], 00:16:26.395 "product_name": "FTL disk", 00:16:26.395 "block_size": 4096, 00:16:26.395 "num_blocks": 23592960, 00:16:26.395 "uuid": "05fcf3be-df9c-41ef-ae3d-41db770ccbb0", 00:16:26.395 "assigned_rate_limits": { 00:16:26.395 "rw_ios_per_sec": 0, 00:16:26.395 "rw_mbytes_per_sec": 0, 00:16:26.395 "r_mbytes_per_sec": 0, 00:16:26.395 "w_mbytes_per_sec": 0 00:16:26.395 }, 00:16:26.395 "claimed": false, 00:16:26.395 "zoned": false, 00:16:26.395 "supported_io_types": { 
00:16:26.395 "read": true, 00:16:26.395 "write": true, 00:16:26.395 "unmap": true, 00:16:26.395 "write_zeroes": true, 00:16:26.395 "flush": true, 00:16:26.395 "reset": false, 00:16:26.395 "compare": false, 00:16:26.395 "compare_and_write": false, 00:16:26.395 "abort": false, 00:16:26.395 "nvme_admin": false, 00:16:26.395 "nvme_io": false 00:16:26.395 }, 00:16:26.395 "driver_specific": { 00:16:26.395 "ftl": { 00:16:26.395 "base_bdev": "6a3ef4ab-1282-46c0-b9b5-097c70082caf", 00:16:26.395 "cache": "nvc0n1p0" 00:16:26.395 } 00:16:26.395 } 00:16:26.395 } 00:16:26.395 ]' 00:16:26.395 00:20:40 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:26.395 00:20:40 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:26.395 00:20:40 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:26.656 [2024-07-23 00:20:41.076944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.656 [2024-07-23 00:20:41.076999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:26.656 [2024-07-23 00:20:41.077016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:26.656 [2024-07-23 00:20:41.077029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.656 [2024-07-23 00:20:41.077092] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:26.656 [2024-07-23 00:20:41.077816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.656 [2024-07-23 00:20:41.077829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:26.656 [2024-07-23 00:20:41.077843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:16:26.656 [2024-07-23 00:20:41.077853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.656 [2024-07-23 00:20:41.078881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.656 [2024-07-23 00:20:41.078909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:26.656 [2024-07-23 00:20:41.078926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:16:26.656 [2024-07-23 00:20:41.078952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.656 [2024-07-23 00:20:41.081826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.656 [2024-07-23 00:20:41.081850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:26.656 [2024-07-23 00:20:41.081864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.816 ms 00:16:26.656 [2024-07-23 00:20:41.081875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.656 [2024-07-23 00:20:41.087726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.656 [2024-07-23 00:20:41.087855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:26.656 [2024-07-23 00:20:41.087978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.773 ms 00:16:26.656 [2024-07-23 00:20:41.088015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.656 [2024-07-23 00:20:41.089696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.656 [2024-07-23 00:20:41.089827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:26.656 [2024-07-23 00:20:41.089904] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.506 ms 00:16:26.656 [2024-07-23 00:20:41.089940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.656 [2024-07-23 00:20:41.094650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.656 [2024-07-23 00:20:41.094688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:26.656 [2024-07-23 00:20:41.094703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.645 ms 00:16:26.656 [2024-07-23 00:20:41.094714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.656 [2024-07-23 00:20:41.094987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.656 [2024-07-23 00:20:41.095002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:26.656 [2024-07-23 00:20:41.095018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:16:26.656 [2024-07-23 00:20:41.095028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.656 [2024-07-23 00:20:41.096889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.656 [2024-07-23 00:20:41.096921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:26.656 [2024-07-23 00:20:41.096935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.804 ms 00:16:26.656 [2024-07-23 00:20:41.096945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.656 [2024-07-23 00:20:41.098512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.656 [2024-07-23 00:20:41.098545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:26.656 [2024-07-23 00:20:41.098559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.497 ms 00:16:26.656 [2024-07-23 00:20:41.098569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.656 [2024-07-23 00:20:41.099706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.656 [2024-07-23 00:20:41.099738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:26.656 [2024-07-23 00:20:41.099753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:16:26.656 [2024-07-23 00:20:41.099762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.656 [2024-07-23 00:20:41.101042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.656 [2024-07-23 00:20:41.101081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:26.656 [2024-07-23 00:20:41.101096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:16:26.656 [2024-07-23 00:20:41.101105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.656 [2024-07-23 00:20:41.101173] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:26.656 [2024-07-23 00:20:41.101189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:26.656 [2024-07-23 00:20:41.101204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:26.656 [2024-07-23 00:20:41.101215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:26.656 [2024-07-23 00:20:41.101232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:16:26.656 [2024-07-23 00:20:41.101243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:26.656 [2024-07-23 00:20:41.101258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:26.656 [2024-07-23 00:20:41.101278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:26.656 [2024-07-23 00:20:41.101292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:26.656 [2024-07-23 00:20:41.101303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:26.656 [2024-07-23 00:20:41.101316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.101991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102144] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:26.657 [2024-07-23 00:20:41.102389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:26.658 [2024-07-23 00:20:41.102403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:26.658 [2024-07-23 00:20:41.102413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:26.658 [2024-07-23 00:20:41.102429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:26.658 [2024-07-23 00:20:41.102447] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:26.658 [2024-07-23 00:20:41.102459] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05fcf3be-df9c-41ef-ae3d-41db770ccbb0 00:16:26.658 [2024-07-23 00:20:41.102470] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:26.658 [2024-07-23 00:20:41.102482] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:16:26.658 [2024-07-23 00:20:41.102492] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:26.658 [2024-07-23 00:20:41.102507] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:26.658 [2024-07-23 00:20:41.102517] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:26.658 [2024-07-23 00:20:41.102529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:26.658 [2024-07-23 00:20:41.102540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:26.658 [2024-07-23 00:20:41.102552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:26.658 [2024-07-23 00:20:41.102561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:26.658 [2024-07-23 00:20:41.102574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.658 [2024-07-23 00:20:41.102584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:26.658 [2024-07-23 00:20:41.102597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.404 ms 00:16:26.658 [2024-07-23 00:20:41.102607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.658 [2024-07-23 00:20:41.104505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.658 [2024-07-23 00:20:41.104529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:26.658 [2024-07-23 00:20:41.104543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.842 ms 00:16:26.658 [2024-07-23 00:20:41.104553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.658 [2024-07-23 00:20:41.104718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.658 [2024-07-23 00:20:41.104743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:26.658 [2024-07-23 00:20:41.104757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:16:26.658 [2024-07-23 00:20:41.104767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.658 [2024-07-23 00:20:41.111978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:26.658 [2024-07-23 00:20:41.112011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:26.658 [2024-07-23 00:20:41.112025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:26.658 [2024-07-23 00:20:41.112048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.658 [2024-07-23 00:20:41.112171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:26.658 [2024-07-23 00:20:41.112183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:26.658 [2024-07-23 00:20:41.112196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:26.658 [2024-07-23 00:20:41.112205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.658 [2024-07-23 00:20:41.112343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:26.658 [2024-07-23 00:20:41.112357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:26.658 [2024-07-23 00:20:41.112373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:26.658 [2024-07-23 00:20:41.112384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.658 [2024-07-23 00:20:41.112436] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:26.658 [2024-07-23 00:20:41.112446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:16:26.658 [2024-07-23 00:20:41.112460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:26.658 [2024-07-23 00:20:41.112470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:26.658 [2024-07-23 00:20:41.125239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:26.658 [2024-07-23 00:20:41.125299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:16:26.658 [2024-07-23 00:20:41.125316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:26.658 [2024-07-23 00:20:41.125330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:26.658 [2024-07-23 00:20:41.133858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:26.658 [2024-07-23 00:20:41.133898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:16:26.658 [2024-07-23 00:20:41.133914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:26.658 [2024-07-23 00:20:41.133924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:26.658 [2024-07-23 00:20:41.134037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:26.658 [2024-07-23 00:20:41.134049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:16:26.658 [2024-07-23 00:20:41.134062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:26.658 [2024-07-23 00:20:41.134076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:26.658 [2024-07-23 00:20:41.134173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:26.658 [2024-07-23 00:20:41.134184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:16:26.658 [2024-07-23 00:20:41.134198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:26.658 [2024-07-23 00:20:41.134208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:26.658 [2024-07-23 00:20:41.134356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:26.658 [2024-07-23 00:20:41.134371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:16:26.658 [2024-07-23 00:20:41.134384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:26.658 [2024-07-23 00:20:41.134394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:26.658 [2024-07-23 00:20:41.134486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:26.658 [2024-07-23 00:20:41.134499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:16:26.658 [2024-07-23 00:20:41.134512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:26.658 [2024-07-23 00:20:41.134521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:26.658 [2024-07-23 00:20:41.134605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:26.658 [2024-07-23 00:20:41.134616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:26.658 [2024-07-23 00:20:41.134644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:26.658 [2024-07-23 00:20:41.134654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:26.658 [2024-07-23 00:20:41.134748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:26.658 [2024-07-23 00:20:41.134759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:26.658 [2024-07-23 00:20:41.134772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:26.658 [2024-07-23 00:20:41.134783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:26.658 [2024-07-23 00:20:41.135101] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.198 ms, result 0
00:16:26.658 true
00:16:26.658 00:20:41 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 88724
00:16:26.658 00:20:41 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 88724 ']'
00:16:26.658 00:20:41 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 88724
00:16:26.658 00:20:41 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname
00:16:26.658 00:20:41 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:16:26.658 00:20:41 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88724
00:16:26.658 killing process with pid 88724 00:20:41 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:16:26.658 00:20:41 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:16:26.658 00:20:41 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88724'
00:16:26.658 00:20:41 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 88724
00:16:26.658 00:20:41 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 88724
00:16:29.947 00:20:44 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:16:30.516 65536+0 records in
00:16:30.516 65536+0 records out
00:16:30.516 268435456 bytes (268 MB, 256 MiB) copied, 0.946845 s, 284 MB/s
00:16:30.775 00:20:45 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:16:30.775 [2024-07-23 00:20:45.259414] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
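The killprocess trace above captures the harness's whole shutdown path for the first spdk_tgt instance (pid 88724): guard against an empty pid, confirm the process is alive with kill -0, resolve its command name with ps (reactor_0, i.e. an SPDK reactor), skip the kill if the name is sudo, then kill and wait. A minimal reconstruction of that helper from the xtrace lines alone; the real function lives in common/autotest_common.sh and may differ, and the behavior of the sudo branch is assumed here:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                            # @946: reject a missing pid
        kill -0 "$pid" || return 1                           # @950: process must still exist
        if [ "$(uname)" = Linux ]; then                      # @951: name lookup is Linux-only
            process_name=$(ps --no-headers -o comm= "$pid")  # @952: resolves to reactor_0 here
        fi
        [ "$process_name" = sudo ] && return 1               # @956: never kill a sudo wrapper (assumed branch)
        echo "killing process with pid $pid"                 # @964
        kill "$pid"                                          # @965
        wait "$pid"                                          # @970: reap it so the exit status is observed
    }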
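A quick check on the dd numbers just above: 65536 blocks of 4 KiB is 65536 * 4096 = 268435456 bytes, and 268435456 bytes / 0.946845 s ≈ 283.5 MB/s, which dd rounds to the reported 284 MB/s (decimal megabytes; in binary units that is about 270 MiB/s). The same arithmetic in shell, purely as an illustration:

    bytes=$((65536 * 4096))   # 268435456, the size of the random pattern file
    seconds=0.946845          # elapsed time reported by dd
    # dd reports decimal MB/s: bytes / seconds / 10^6
    awk -v b="$bytes" -v s="$seconds" 'BEGIN { printf "%.0f MB/s\n", b / s / 1e6 }'

spdk_dd then replays that random pattern onto the ftl0 bdev; the startup banner above and the DPDK EAL parameters that follow belong to that spdk_dd process (pid 88902).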
00:16:30.775 [2024-07-23 00:20:45.259537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88902 ] 00:16:30.775 [2024-07-23 00:20:45.401165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.775 [2024-07-23 00:20:45.442995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.036 [2024-07-23 00:20:45.544518] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:31.036 [2024-07-23 00:20:45.544592] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:31.036 [2024-07-23 00:20:45.696151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.036 [2024-07-23 00:20:45.696208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:31.036 [2024-07-23 00:20:45.696224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:31.036 [2024-07-23 00:20:45.696250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.036 [2024-07-23 00:20:45.698692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.036 [2024-07-23 00:20:45.698732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:31.036 [2024-07-23 00:20:45.698752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.424 ms 00:16:31.036 [2024-07-23 00:20:45.698779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.036 [2024-07-23 00:20:45.698866] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:31.036 [2024-07-23 00:20:45.699085] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:31.036 [2024-07-23 00:20:45.699103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.036 [2024-07-23 00:20:45.699114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:31.036 [2024-07-23 00:20:45.699128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:16:31.036 [2024-07-23 00:20:45.699138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.036 [2024-07-23 00:20:45.700611] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:31.036 [2024-07-23 00:20:45.703098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.036 [2024-07-23 00:20:45.703133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:31.036 [2024-07-23 00:20:45.703146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.492 ms 00:16:31.036 [2024-07-23 00:20:45.703156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.036 [2024-07-23 00:20:45.703223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.036 [2024-07-23 00:20:45.703236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:31.036 [2024-07-23 00:20:45.703247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:31.036 [2024-07-23 00:20:45.703274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.036 [2024-07-23 00:20:45.709952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.036 [2024-07-23 
00:20:45.709980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:31.036 [2024-07-23 00:20:45.709992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.637 ms 00:16:31.036 [2024-07-23 00:20:45.710002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.036 [2024-07-23 00:20:45.710128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.036 [2024-07-23 00:20:45.710143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:31.036 [2024-07-23 00:20:45.710154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:31.036 [2024-07-23 00:20:45.710173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.036 [2024-07-23 00:20:45.710205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.036 [2024-07-23 00:20:45.710219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:31.036 [2024-07-23 00:20:45.710229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:31.036 [2024-07-23 00:20:45.710238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.036 [2024-07-23 00:20:45.710276] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:31.036 [2024-07-23 00:20:45.711888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.036 [2024-07-23 00:20:45.711913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:31.036 [2024-07-23 00:20:45.711929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.635 ms 00:16:31.036 [2024-07-23 00:20:45.711939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.036 [2024-07-23 00:20:45.711986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.036 [2024-07-23 00:20:45.711998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:31.036 [2024-07-23 00:20:45.712008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:31.036 [2024-07-23 00:20:45.712018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.036 [2024-07-23 00:20:45.712038] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:31.036 [2024-07-23 00:20:45.712061] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:31.036 [2024-07-23 00:20:45.712104] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:31.036 [2024-07-23 00:20:45.712131] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:16:31.036 [2024-07-23 00:20:45.712215] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:31.036 [2024-07-23 00:20:45.712228] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:31.036 [2024-07-23 00:20:45.712241] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:31.036 [2024-07-23 00:20:45.712253] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:31.036 [2024-07-23 00:20:45.712287] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:31.036 [2024-07-23 00:20:45.712306] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:31.036 [2024-07-23 00:20:45.712316] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:31.036 [2024-07-23 00:20:45.712326] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:31.036 [2024-07-23 00:20:45.712339] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:31.036 [2024-07-23 00:20:45.712349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.036 [2024-07-23 00:20:45.712359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:31.036 [2024-07-23 00:20:45.712369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:16:31.036 [2024-07-23 00:20:45.712385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.036 [2024-07-23 00:20:45.712464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.036 [2024-07-23 00:20:45.712475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:31.036 [2024-07-23 00:20:45.712485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:31.036 [2024-07-23 00:20:45.712494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.036 [2024-07-23 00:20:45.712581] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:31.036 [2024-07-23 00:20:45.712594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:31.036 [2024-07-23 00:20:45.712605] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:31.036 [2024-07-23 00:20:45.712622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:31.036 [2024-07-23 00:20:45.712632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:31.036 [2024-07-23 00:20:45.712641] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:31.036 [2024-07-23 00:20:45.712651] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:31.036 [2024-07-23 00:20:45.712660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:31.036 [2024-07-23 00:20:45.712670] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:31.036 [2024-07-23 00:20:45.712680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:31.036 [2024-07-23 00:20:45.712689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:31.036 [2024-07-23 00:20:45.712698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:31.036 [2024-07-23 00:20:45.712712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:31.036 [2024-07-23 00:20:45.712723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:31.036 [2024-07-23 00:20:45.712733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:31.036 [2024-07-23 00:20:45.712742] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:31.036 [2024-07-23 00:20:45.712751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:31.036 [2024-07-23 00:20:45.712761] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:31.036 [2024-07-23 00:20:45.712770] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:31.036 [2024-07-23 00:20:45.712779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:31.036 [2024-07-23 00:20:45.712788] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:31.036 [2024-07-23 00:20:45.712797] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:31.036 [2024-07-23 00:20:45.712806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:31.036 [2024-07-23 00:20:45.712815] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:31.036 [2024-07-23 00:20:45.712824] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:31.036 [2024-07-23 00:20:45.712833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:31.036 [2024-07-23 00:20:45.712843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:31.036 [2024-07-23 00:20:45.712852] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:31.036 [2024-07-23 00:20:45.712867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:31.036 [2024-07-23 00:20:45.712876] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:31.036 [2024-07-23 00:20:45.712884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:31.036 [2024-07-23 00:20:45.712893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:31.036 [2024-07-23 00:20:45.712902] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:31.036 [2024-07-23 00:20:45.712911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:31.036 [2024-07-23 00:20:45.712920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:31.037 [2024-07-23 00:20:45.712929] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:31.037 [2024-07-23 00:20:45.712938] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:31.037 [2024-07-23 00:20:45.712947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:31.037 [2024-07-23 00:20:45.712956] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:31.037 [2024-07-23 00:20:45.712964] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:31.037 [2024-07-23 00:20:45.712973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:31.037 [2024-07-23 00:20:45.712982] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:31.037 [2024-07-23 00:20:45.712991] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:31.037 [2024-07-23 00:20:45.713000] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:31.037 [2024-07-23 00:20:45.713013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:31.037 [2024-07-23 00:20:45.713023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:31.037 [2024-07-23 00:20:45.713033] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:31.037 [2024-07-23 00:20:45.713043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:31.037 [2024-07-23 00:20:45.713052] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:31.037 [2024-07-23 00:20:45.713061] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:31.037 [2024-07-23 00:20:45.713079] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:31.037 [2024-07-23 00:20:45.713088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:31.037 [2024-07-23 00:20:45.713097] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:31.037 [2024-07-23 00:20:45.713108] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:31.037 [2024-07-23 00:20:45.713120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:31.037 [2024-07-23 00:20:45.713132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:31.037 [2024-07-23 00:20:45.713142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:31.037 [2024-07-23 00:20:45.713153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:31.037 [2024-07-23 00:20:45.713163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:31.037 [2024-07-23 00:20:45.713174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:31.037 [2024-07-23 00:20:45.713186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:31.037 [2024-07-23 00:20:45.713197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:31.037 [2024-07-23 00:20:45.713207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:31.037 [2024-07-23 00:20:45.713217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:31.037 [2024-07-23 00:20:45.713227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:31.037 [2024-07-23 00:20:45.713237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:31.037 [2024-07-23 00:20:45.713248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:31.037 [2024-07-23 00:20:45.713258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:31.037 [2024-07-23 00:20:45.713546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:31.037 [2024-07-23 00:20:45.713598] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:31.037 [2024-07-23 00:20:45.713652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:31.037 [2024-07-23 00:20:45.713715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:16:31.037 [2024-07-23 00:20:45.713761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:31.037 [2024-07-23 00:20:45.713863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:31.037 [2024-07-23 00:20:45.713913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:31.037 [2024-07-23 00:20:45.713963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.037 [2024-07-23 00:20:45.714005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:31.037 [2024-07-23 00:20:45.714037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:16:31.037 [2024-07-23 00:20:45.714066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.297 [2024-07-23 00:20:45.734661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.297 [2024-07-23 00:20:45.734808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:31.297 [2024-07-23 00:20:45.734892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.406 ms 00:16:31.297 [2024-07-23 00:20:45.734943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.297 [2024-07-23 00:20:45.735086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.297 [2024-07-23 00:20:45.735228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:31.297 [2024-07-23 00:20:45.735313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:31.297 [2024-07-23 00:20:45.735367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.297 [2024-07-23 00:20:45.746572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.297 [2024-07-23 00:20:45.746722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:31.297 [2024-07-23 00:20:45.746837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.161 ms 00:16:31.297 [2024-07-23 00:20:45.746887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.297 [2024-07-23 00:20:45.746975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.297 [2024-07-23 00:20:45.747009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:31.297 [2024-07-23 00:20:45.747094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:31.297 [2024-07-23 00:20:45.747110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.297 [2024-07-23 00:20:45.747569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.297 [2024-07-23 00:20:45.747587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:31.297 [2024-07-23 00:20:45.747598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:16:31.297 [2024-07-23 00:20:45.747608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.297 [2024-07-23 00:20:45.747729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.297 [2024-07-23 00:20:45.747742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:31.297 [2024-07-23 00:20:45.747759] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:16:31.297 [2024-07-23 00:20:45.747769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.297 [2024-07-23 00:20:45.754029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.297 [2024-07-23 00:20:45.754063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:31.297 [2024-07-23 00:20:45.754076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.246 ms 00:16:31.297 [2024-07-23 00:20:45.754087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.297 [2024-07-23 00:20:45.756704] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:31.297 [2024-07-23 00:20:45.756740] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:31.297 [2024-07-23 00:20:45.756755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.297 [2024-07-23 00:20:45.756768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:31.297 [2024-07-23 00:20:45.756780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.571 ms 00:16:31.297 [2024-07-23 00:20:45.756790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.297 [2024-07-23 00:20:45.769583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.297 [2024-07-23 00:20:45.769621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:31.297 [2024-07-23 00:20:45.769634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.767 ms 00:16:31.297 [2024-07-23 00:20:45.769659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.297 [2024-07-23 00:20:45.771474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.297 [2024-07-23 00:20:45.771507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:31.297 [2024-07-23 00:20:45.771519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms 00:16:31.297 [2024-07-23 00:20:45.771528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.297 [2024-07-23 00:20:45.773104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.297 [2024-07-23 00:20:45.773137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:31.297 [2024-07-23 00:20:45.773148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.533 ms 00:16:31.297 [2024-07-23 00:20:45.773159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.297 [2024-07-23 00:20:45.773458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.297 [2024-07-23 00:20:45.773484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:31.297 [2024-07-23 00:20:45.773496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:16:31.297 [2024-07-23 00:20:45.773506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.297 [2024-07-23 00:20:45.794308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.297 [2024-07-23 00:20:45.794373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:31.297 [2024-07-23 00:20:45.794389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.807 ms 
00:16:31.297 [2024-07-23 00:20:45.794411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.297 [2024-07-23 00:20:45.800729] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:31.297 [2024-07-23 00:20:45.817392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.298 [2024-07-23 00:20:45.817450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:31.298 [2024-07-23 00:20:45.817479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.924 ms 00:16:31.298 [2024-07-23 00:20:45.817490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.298 [2024-07-23 00:20:45.817602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.298 [2024-07-23 00:20:45.817615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:31.298 [2024-07-23 00:20:45.817627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:31.298 [2024-07-23 00:20:45.817642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.298 [2024-07-23 00:20:45.817700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.298 [2024-07-23 00:20:45.817711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:31.298 [2024-07-23 00:20:45.817722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:31.298 [2024-07-23 00:20:45.817731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.298 [2024-07-23 00:20:45.817755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.298 [2024-07-23 00:20:45.817766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:31.298 [2024-07-23 00:20:45.817776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:31.298 [2024-07-23 00:20:45.817797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.298 [2024-07-23 00:20:45.817835] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:31.298 [2024-07-23 00:20:45.817847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.298 [2024-07-23 00:20:45.817857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:31.298 [2024-07-23 00:20:45.817875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:31.298 [2024-07-23 00:20:45.817885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.298 [2024-07-23 00:20:45.821759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.298 [2024-07-23 00:20:45.821795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:31.298 [2024-07-23 00:20:45.821809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.851 ms 00:16:31.298 [2024-07-23 00:20:45.821819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.298 [2024-07-23 00:20:45.821921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.298 [2024-07-23 00:20:45.821936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:31.298 [2024-07-23 00:20:45.821947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:31.298 [2024-07-23 00:20:45.821957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.298 [2024-07-23 
00:20:45.822899] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:31.298 [2024-07-23 00:20:45.823858] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 126.685 ms, result 0 00:16:31.298 [2024-07-23 00:20:45.824614] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:31.298 [2024-07-23 00:20:45.834432] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:41.547  Copying: 25/256 [MB] (25 MBps) Copying: 50/256 [MB] (25 MBps) Copying: 75/256 [MB] (25 MBps) Copying: 101/256 [MB] (25 MBps) Copying: 127/256 [MB] (25 MBps) Copying: 152/256 [MB] (24 MBps) Copying: 177/256 [MB] (25 MBps) Copying: 202/256 [MB] (24 MBps) Copying: 226/256 [MB] (24 MBps) Copying: 251/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-23 00:20:56.005941] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:41.547 [2024-07-23 00:20:56.007460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.547 [2024-07-23 00:20:56.007616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:41.547 [2024-07-23 00:20:56.007715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:41.547 [2024-07-23 00:20:56.007752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.547 [2024-07-23 00:20:56.007803] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:41.547 [2024-07-23 00:20:56.008656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.547 [2024-07-23 00:20:56.008770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:41.547 [2024-07-23 00:20:56.008852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:16:41.547 [2024-07-23 00:20:56.008886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.547 [2024-07-23 00:20:56.010635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.547 [2024-07-23 00:20:56.010779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:41.547 [2024-07-23 00:20:56.010810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.703 ms 00:16:41.547 [2024-07-23 00:20:56.010824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.547 [2024-07-23 00:20:56.017617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.547 [2024-07-23 00:20:56.017759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:41.547 [2024-07-23 00:20:56.017838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.778 ms 00:16:41.547 [2024-07-23 00:20:56.017873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.547 [2024-07-23 00:20:56.023623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.547 [2024-07-23 00:20:56.023745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:41.547 [2024-07-23 00:20:56.023831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.670 ms 00:16:41.547 [2024-07-23 00:20:56.023852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.547 [2024-07-23 00:20:56.025282] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.547 [2024-07-23 00:20:56.025317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:41.547 [2024-07-23 00:20:56.025329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:16:41.547 [2024-07-23 00:20:56.025338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.547 [2024-07-23 00:20:56.029111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.547 [2024-07-23 00:20:56.029149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:41.547 [2024-07-23 00:20:56.029161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.747 ms 00:16:41.547 [2024-07-23 00:20:56.029181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.547 [2024-07-23 00:20:56.029303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.547 [2024-07-23 00:20:56.029324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:41.547 [2024-07-23 00:20:56.029335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:16:41.547 [2024-07-23 00:20:56.029349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.547 [2024-07-23 00:20:56.031279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.547 [2024-07-23 00:20:56.031327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:41.547 [2024-07-23 00:20:56.031339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.915 ms 00:16:41.547 [2024-07-23 00:20:56.031349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.547 [2024-07-23 00:20:56.032772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.547 [2024-07-23 00:20:56.032806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:41.547 [2024-07-23 00:20:56.032816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.394 ms 00:16:41.547 [2024-07-23 00:20:56.032826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.547 [2024-07-23 00:20:56.034005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.547 [2024-07-23 00:20:56.034041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:41.547 [2024-07-23 00:20:56.034053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:16:41.547 [2024-07-23 00:20:56.034062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.547 [2024-07-23 00:20:56.035165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.547 [2024-07-23 00:20:56.035199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:41.547 [2024-07-23 00:20:56.035210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms 00:16:41.548 [2024-07-23 00:20:56.035219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.548 [2024-07-23 00:20:56.035246] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:41.548 [2024-07-23 00:20:56.035276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 
[2024-07-23 00:20:56.035301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:16:41.548 [2024-07-23 00:20:56.035561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.035995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:41.548 [2024-07-23 00:20:56.036181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:41.549 [2024-07-23 00:20:56.036191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:41.549 [2024-07-23 00:20:56.036201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:41.549 [2024-07-23 00:20:56.036222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:41.549 [2024-07-23 00:20:56.036233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:41.549 [2024-07-23 00:20:56.036243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:41.549 [2024-07-23 00:20:56.036254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:41.549 [2024-07-23 00:20:56.036264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:41.549 [2024-07-23 00:20:56.036283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:41.549 [2024-07-23 00:20:56.036293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:41.549 [2024-07-23 00:20:56.036304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:41.549 [2024-07-23 00:20:56.036315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:41.549 [2024-07-23 00:20:56.036325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:41.549 [2024-07-23 00:20:56.036335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:41.549 [2024-07-23 00:20:56.036346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:41.549 [2024-07-23 00:20:56.036363] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:41.549 [2024-07-23 00:20:56.036388] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
05fcf3be-df9c-41ef-ae3d-41db770ccbb0 00:16:41.549 [2024-07-23 00:20:56.036399] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:41.549 [2024-07-23 00:20:56.036409] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:41.549 [2024-07-23 00:20:56.036418] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:41.549 [2024-07-23 00:20:56.036427] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:41.549 [2024-07-23 00:20:56.036436] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:41.549 [2024-07-23 00:20:56.036446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:41.549 [2024-07-23 00:20:56.036460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:41.549 [2024-07-23 00:20:56.036468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:41.549 [2024-07-23 00:20:56.036477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:41.549 [2024-07-23 00:20:56.036487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.549 [2024-07-23 00:20:56.036496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:41.549 [2024-07-23 00:20:56.036513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.243 ms 00:16:41.549 [2024-07-23 00:20:56.036522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.549 [2024-07-23 00:20:56.038268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.549 [2024-07-23 00:20:56.038290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:41.549 [2024-07-23 00:20:56.038301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.717 ms 00:16:41.549 [2024-07-23 00:20:56.038311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.549 [2024-07-23 00:20:56.038420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.549 [2024-07-23 00:20:56.038430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:41.549 [2024-07-23 00:20:56.038441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:16:41.549 [2024-07-23 00:20:56.038450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.549 [2024-07-23 00:20:56.044706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.549 [2024-07-23 00:20:56.044729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:41.549 [2024-07-23 00:20:56.044740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.549 [2024-07-23 00:20:56.044755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.549 [2024-07-23 00:20:56.044820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.549 [2024-07-23 00:20:56.044832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:41.549 [2024-07-23 00:20:56.044842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.549 [2024-07-23 00:20:56.044853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.549 [2024-07-23 00:20:56.044896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.549 [2024-07-23 00:20:56.044908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:41.549 
[2024-07-23 00:20:56.044918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.549 [2024-07-23 00:20:56.044928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.549 [2024-07-23 00:20:56.044951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.549 [2024-07-23 00:20:56.044961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:41.549 [2024-07-23 00:20:56.044971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.549 [2024-07-23 00:20:56.044980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.549 [2024-07-23 00:20:56.056577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.549 [2024-07-23 00:20:56.056621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:41.549 [2024-07-23 00:20:56.056650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.549 [2024-07-23 00:20:56.056661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.549 [2024-07-23 00:20:56.064933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.549 [2024-07-23 00:20:56.064978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:41.549 [2024-07-23 00:20:56.064991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.549 [2024-07-23 00:20:56.065018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.549 [2024-07-23 00:20:56.065045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.549 [2024-07-23 00:20:56.065056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:41.549 [2024-07-23 00:20:56.065073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.549 [2024-07-23 00:20:56.065091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.549 [2024-07-23 00:20:56.065121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.549 [2024-07-23 00:20:56.065135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:41.549 [2024-07-23 00:20:56.065146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.549 [2024-07-23 00:20:56.065156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.549 [2024-07-23 00:20:56.065231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.549 [2024-07-23 00:20:56.065243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:41.549 [2024-07-23 00:20:56.065253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.549 [2024-07-23 00:20:56.065278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.549 [2024-07-23 00:20:56.065315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.549 [2024-07-23 00:20:56.065327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:41.549 [2024-07-23 00:20:56.065341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.549 [2024-07-23 00:20:56.065350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.549 [2024-07-23 00:20:56.065392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.549 [2024-07-23 00:20:56.065403] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:41.549 [2024-07-23 00:20:56.065413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.549 [2024-07-23 00:20:56.065423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.549 [2024-07-23 00:20:56.065480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.549 [2024-07-23 00:20:56.065496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:41.549 [2024-07-23 00:20:56.065507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.549 [2024-07-23 00:20:56.065517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.549 [2024-07-23 00:20:56.065663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.272 ms, result 0 00:16:41.809 00:16:41.809 00:16:42.068 00:20:56 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:42.068 00:20:56 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=89025 00:16:42.068 00:20:56 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 89025 00:16:42.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.068 00:20:56 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 89025 ']' 00:16:42.068 00:20:56 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.068 00:20:56 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:42.068 00:20:56 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.068 00:20:56 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:42.068 00:20:56 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:42.068 [2024-07-23 00:20:56.586720] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
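The trace above is the trim test's setup step: ftl/trim.sh launches spdk_tgt with the ftl_init log flag, records the pid in svcpid, and then blocks in waitforlisten (note max_retries=100 and rpc_addr=/var/tmp/spdk.sock in the xtrace) until the target is accepting RPCs. Below is a minimal sketch of that start-and-wait step. The polling loop body is an assumption for illustration only — the harness's real waitforlisten helper lives in common/autotest_common.sh and is not reproduced in this log — while the binary path, the -L ftl_init flag, the socket path, and the retry limit of 100 are taken directly from the trace.

#!/usr/bin/env bash
# Sketch only: approximates the start-and-wait step traced above; not the
# harness's actual waitforlisten implementation.
SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC_SOCK=/var/tmp/spdk.sock            # default SPDK RPC listen address, per the trace

"$SPDK_TGT" -L ftl_init &              # -L ftl_init enables the ftl_init log flag
svcpid=$!

for ((i = 0; i < 100; i++)); do        # 100 mirrors max_retries in the trace
    # Fail fast if the target already exited.
    kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    # spdk_tgt creates this UNIX domain socket once its RPC server starts listening.
    if [ -S "$RPC_SOCK" ]; then
        break
    fi
    sleep 0.1
done

Once the socket is up, subsequent steps in the log drive the target through scripts/rpc.py (load_config, bdev_ftl_unmap), exactly as the xtrace lines that follow show.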
00:16:42.068 [2024-07-23 00:20:56.586841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89025 ] 00:16:42.068 [2024-07-23 00:20:56.735897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.327 [2024-07-23 00:20:56.777839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.925 00:20:57 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:42.925 00:20:57 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:16:42.925 00:20:57 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:42.925 [2024-07-23 00:20:57.567550] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:42.925 [2024-07-23 00:20:57.567613] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:43.187 [2024-07-23 00:20:57.735286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.187 [2024-07-23 00:20:57.735336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:43.187 [2024-07-23 00:20:57.735354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:43.187 [2024-07-23 00:20:57.735364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.187 [2024-07-23 00:20:57.737718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.187 [2024-07-23 00:20:57.737759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:43.187 [2024-07-23 00:20:57.737776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.335 ms 00:16:43.187 [2024-07-23 00:20:57.737786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.187 [2024-07-23 00:20:57.737873] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:43.187 [2024-07-23 00:20:57.738091] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:43.187 [2024-07-23 00:20:57.738118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.187 [2024-07-23 00:20:57.738128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:43.187 [2024-07-23 00:20:57.738142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:16:43.187 [2024-07-23 00:20:57.738151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.187 [2024-07-23 00:20:57.739624] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:43.187 [2024-07-23 00:20:57.742206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.187 [2024-07-23 00:20:57.742248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:43.187 [2024-07-23 00:20:57.742269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.591 ms 00:16:43.187 [2024-07-23 00:20:57.742283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.187 [2024-07-23 00:20:57.742346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.187 [2024-07-23 00:20:57.742362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:43.187 [2024-07-23 00:20:57.742382] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:16:43.187 [2024-07-23 00:20:57.742397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.187 [2024-07-23 00:20:57.749140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.187 [2024-07-23 00:20:57.749170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:43.187 [2024-07-23 00:20:57.749205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.707 ms 00:16:43.187 [2024-07-23 00:20:57.749218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.187 [2024-07-23 00:20:57.749347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.187 [2024-07-23 00:20:57.749365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:43.187 [2024-07-23 00:20:57.749376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:16:43.187 [2024-07-23 00:20:57.749388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.187 [2024-07-23 00:20:57.749423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.187 [2024-07-23 00:20:57.749436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:43.187 [2024-07-23 00:20:57.749447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:43.187 [2024-07-23 00:20:57.749458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.187 [2024-07-23 00:20:57.749484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:43.187 [2024-07-23 00:20:57.751078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.187 [2024-07-23 00:20:57.751104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:43.187 [2024-07-23 00:20:57.751121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.600 ms 00:16:43.187 [2024-07-23 00:20:57.751133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.187 [2024-07-23 00:20:57.751180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.187 [2024-07-23 00:20:57.751191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:43.187 [2024-07-23 00:20:57.751212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:43.187 [2024-07-23 00:20:57.751222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.187 [2024-07-23 00:20:57.751247] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:43.187 [2024-07-23 00:20:57.751303] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:43.187 [2024-07-23 00:20:57.751346] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:43.187 [2024-07-23 00:20:57.751366] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:16:43.187 [2024-07-23 00:20:57.751452] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:43.187 [2024-07-23 00:20:57.751465] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:43.187 [2024-07-23 00:20:57.751484] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:43.187 [2024-07-23 00:20:57.751497] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:43.187 [2024-07-23 00:20:57.751511] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:43.187 [2024-07-23 00:20:57.751533] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:43.187 [2024-07-23 00:20:57.751548] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:43.187 [2024-07-23 00:20:57.751558] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:43.187 [2024-07-23 00:20:57.751570] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:43.187 [2024-07-23 00:20:57.751583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.187 [2024-07-23 00:20:57.751595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:43.187 [2024-07-23 00:20:57.751605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:16:43.187 [2024-07-23 00:20:57.751617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.187 [2024-07-23 00:20:57.751689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.187 [2024-07-23 00:20:57.751701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:43.187 [2024-07-23 00:20:57.751712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:43.187 [2024-07-23 00:20:57.751723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.187 [2024-07-23 00:20:57.751806] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:43.187 [2024-07-23 00:20:57.751825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:43.187 [2024-07-23 00:20:57.751835] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:43.187 [2024-07-23 00:20:57.751847] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.187 [2024-07-23 00:20:57.751857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:43.187 [2024-07-23 00:20:57.751871] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:43.187 [2024-07-23 00:20:57.751881] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:43.187 [2024-07-23 00:20:57.751892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:43.187 [2024-07-23 00:20:57.751901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:43.187 [2024-07-23 00:20:57.751913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:43.187 [2024-07-23 00:20:57.751922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:43.187 [2024-07-23 00:20:57.751933] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:43.187 [2024-07-23 00:20:57.751943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:43.187 [2024-07-23 00:20:57.751955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:43.187 [2024-07-23 00:20:57.751965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:43.187 [2024-07-23 00:20:57.751977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.187 
[2024-07-23 00:20:57.751985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:43.187 [2024-07-23 00:20:57.751997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:43.187 [2024-07-23 00:20:57.752005] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.187 [2024-07-23 00:20:57.752016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:43.187 [2024-07-23 00:20:57.752025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:43.187 [2024-07-23 00:20:57.752039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.187 [2024-07-23 00:20:57.752048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:43.187 [2024-07-23 00:20:57.752059] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:43.187 [2024-07-23 00:20:57.752068] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.187 [2024-07-23 00:20:57.752079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:43.187 [2024-07-23 00:20:57.752088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:43.187 [2024-07-23 00:20:57.752100] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.187 [2024-07-23 00:20:57.752109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:43.187 [2024-07-23 00:20:57.752120] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:43.187 [2024-07-23 00:20:57.752129] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.187 [2024-07-23 00:20:57.752140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:43.187 [2024-07-23 00:20:57.752149] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:43.187 [2024-07-23 00:20:57.752161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:43.187 [2024-07-23 00:20:57.752169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:43.187 [2024-07-23 00:20:57.752180] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:43.187 [2024-07-23 00:20:57.752189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:43.187 [2024-07-23 00:20:57.752202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:43.187 [2024-07-23 00:20:57.752211] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:43.187 [2024-07-23 00:20:57.752223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.187 [2024-07-23 00:20:57.752232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:43.187 [2024-07-23 00:20:57.752243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:43.187 [2024-07-23 00:20:57.752251] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.187 [2024-07-23 00:20:57.752272] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:43.187 [2024-07-23 00:20:57.752283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:43.188 [2024-07-23 00:20:57.752296] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:43.188 [2024-07-23 00:20:57.752306] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.188 [2024-07-23 00:20:57.752317] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:43.188 [2024-07-23 00:20:57.752327] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:43.188 [2024-07-23 00:20:57.752338] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:43.188 [2024-07-23 00:20:57.752348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:43.188 [2024-07-23 00:20:57.752359] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:43.188 [2024-07-23 00:20:57.752368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:43.188 [2024-07-23 00:20:57.752384] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:43.188 [2024-07-23 00:20:57.752396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:43.188 [2024-07-23 00:20:57.752410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:43.188 [2024-07-23 00:20:57.752420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:43.188 [2024-07-23 00:20:57.752433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:43.188 [2024-07-23 00:20:57.752443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:43.188 [2024-07-23 00:20:57.752455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:43.188 [2024-07-23 00:20:57.752466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:43.188 [2024-07-23 00:20:57.752478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:43.188 [2024-07-23 00:20:57.752488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:43.188 [2024-07-23 00:20:57.752500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:43.188 [2024-07-23 00:20:57.752511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:43.188 [2024-07-23 00:20:57.752524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:43.188 [2024-07-23 00:20:57.752534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:43.188 [2024-07-23 00:20:57.752546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:43.188 [2024-07-23 00:20:57.752556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:43.188 [2024-07-23 00:20:57.752570] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:43.188 [2024-07-23 
00:20:57.752581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:43.188 [2024-07-23 00:20:57.752597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:43.188 [2024-07-23 00:20:57.752607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:43.188 [2024-07-23 00:20:57.752620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:43.188 [2024-07-23 00:20:57.752630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:43.188 [2024-07-23 00:20:57.752643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.752660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:43.188 [2024-07-23 00:20:57.752675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:16:43.188 [2024-07-23 00:20:57.752691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.764631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.764667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:43.188 [2024-07-23 00:20:57.764683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.898 ms 00:16:43.188 [2024-07-23 00:20:57.764701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.764818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.764838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:43.188 [2024-07-23 00:20:57.764854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:43.188 [2024-07-23 00:20:57.764863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.775785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.775819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:43.188 [2024-07-23 00:20:57.775835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.915 ms 00:16:43.188 [2024-07-23 00:20:57.775861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.775942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.775953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:43.188 [2024-07-23 00:20:57.775965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:43.188 [2024-07-23 00:20:57.775975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.776446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.776461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:43.188 [2024-07-23 00:20:57.776474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:16:43.188 [2024-07-23 00:20:57.776484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.776601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.776617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:43.188 [2024-07-23 00:20:57.776632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:16:43.188 [2024-07-23 00:20:57.776650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.783777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.783810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:43.188 [2024-07-23 00:20:57.783825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.113 ms 00:16:43.188 [2024-07-23 00:20:57.783835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.786431] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:43.188 [2024-07-23 00:20:57.786476] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:43.188 [2024-07-23 00:20:57.786492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.786519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:43.188 [2024-07-23 00:20:57.786532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.559 ms 00:16:43.188 [2024-07-23 00:20:57.786541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.798992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.799051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:43.188 [2024-07-23 00:20:57.799068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.392 ms 00:16:43.188 [2024-07-23 00:20:57.799101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.800811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.800842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:43.188 [2024-07-23 00:20:57.800856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:16:43.188 [2024-07-23 00:20:57.800866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.802455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.802486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:43.188 [2024-07-23 00:20:57.802501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:16:43.188 [2024-07-23 00:20:57.802510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.802793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.802819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:43.188 [2024-07-23 00:20:57.802833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:16:43.188 [2024-07-23 00:20:57.802843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.834957] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.835024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:43.188 [2024-07-23 00:20:57.835047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.133 ms 00:16:43.188 [2024-07-23 00:20:57.835060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.841360] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:43.188 [2024-07-23 00:20:57.856809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.856851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:43.188 [2024-07-23 00:20:57.856865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.665 ms 00:16:43.188 [2024-07-23 00:20:57.856894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.856980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.856996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:43.188 [2024-07-23 00:20:57.857008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:43.188 [2024-07-23 00:20:57.857023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.188 [2024-07-23 00:20:57.857084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.188 [2024-07-23 00:20:57.857113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:43.188 [2024-07-23 00:20:57.857124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:43.189 [2024-07-23 00:20:57.857149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.189 [2024-07-23 00:20:57.857175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.189 [2024-07-23 00:20:57.857188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:43.189 [2024-07-23 00:20:57.857198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:43.189 [2024-07-23 00:20:57.857215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.189 [2024-07-23 00:20:57.857250] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:43.189 [2024-07-23 00:20:57.857273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.189 [2024-07-23 00:20:57.857303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:43.189 [2024-07-23 00:20:57.857316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:43.189 [2024-07-23 00:20:57.857325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.189 [2024-07-23 00:20:57.861051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.189 [2024-07-23 00:20:57.861095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:43.189 [2024-07-23 00:20:57.861111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.702 ms 00:16:43.189 [2024-07-23 00:20:57.861121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.189 [2024-07-23 00:20:57.861214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.189 [2024-07-23 00:20:57.861226] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:43.189 [2024-07-23 00:20:57.861248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:43.189 [2024-07-23 00:20:57.861258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.189 [2024-07-23 00:20:57.862249] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:43.189 [2024-07-23 00:20:57.863217] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 126.902 ms, result 0 00:16:43.189 [2024-07-23 00:20:57.864250] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:43.448 Some configs were skipped because the RPC state that can call them passed over. 00:16:43.448 00:20:57 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:43.448 [2024-07-23 00:20:58.078749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.448 [2024-07-23 00:20:58.078802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:43.448 [2024-07-23 00:20:58.078825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:16:43.448 [2024-07-23 00:20:58.078838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.448 [2024-07-23 00:20:58.078875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.611 ms, result 0 00:16:43.448 true 00:16:43.448 00:20:58 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:43.708 [2024-07-23 00:20:58.262543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.708 [2024-07-23 00:20:58.262592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:43.708 [2024-07-23 00:20:58.262610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:16:43.708 [2024-07-23 00:20:58.262620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.708 [2024-07-23 00:20:58.262659] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.435 ms, result 0 00:16:43.708 true 00:16:43.708 00:20:58 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 89025 00:16:43.708 00:20:58 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89025 ']' 00:16:43.708 00:20:58 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89025 00:16:43.708 00:20:58 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:16:43.708 00:20:58 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:43.708 00:20:58 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89025 00:16:43.708 killing process with pid 89025 00:16:43.708 00:20:58 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:43.708 00:20:58 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:43.708 00:20:58 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89025' 00:16:43.708 00:20:58 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 89025 00:16:43.708 00:20:58 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 89025 00:16:43.968 [2024-07-23 00:20:58.456697] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.968 [2024-07-23 00:20:58.456766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:43.968 [2024-07-23 00:20:58.456782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:43.968 [2024-07-23 00:20:58.456795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.968 [2024-07-23 00:20:58.456825] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:43.968 [2024-07-23 00:20:58.457518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.968 [2024-07-23 00:20:58.457537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:43.968 [2024-07-23 00:20:58.457550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:16:43.968 [2024-07-23 00:20:58.457561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.968 [2024-07-23 00:20:58.457810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.968 [2024-07-23 00:20:58.457828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:43.968 [2024-07-23 00:20:58.457840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:16:43.968 [2024-07-23 00:20:58.457850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.968 [2024-07-23 00:20:58.462316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.968 [2024-07-23 00:20:58.462356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:43.968 [2024-07-23 00:20:58.462372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.447 ms 00:16:43.968 [2024-07-23 00:20:58.462382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.968 [2024-07-23 00:20:58.468103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.968 [2024-07-23 00:20:58.468141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:43.968 [2024-07-23 00:20:58.468155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.682 ms 00:16:43.968 [2024-07-23 00:20:58.468164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.968 [2024-07-23 00:20:58.469647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.968 [2024-07-23 00:20:58.469684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:43.968 [2024-07-23 00:20:58.469698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms 00:16:43.968 [2024-07-23 00:20:58.469708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.968 [2024-07-23 00:20:58.473433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.968 [2024-07-23 00:20:58.473468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:43.968 [2024-07-23 00:20:58.473483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.692 ms 00:16:43.968 [2024-07-23 00:20:58.473492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.968 [2024-07-23 00:20:58.473623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.968 [2024-07-23 00:20:58.473635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:43.968 [2024-07-23 00:20:58.473648] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:16:43.968 [2024-07-23 00:20:58.473658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.968 [2024-07-23 00:20:58.475610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.968 [2024-07-23 00:20:58.475642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:43.968 [2024-07-23 00:20:58.475656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.930 ms 00:16:43.968 [2024-07-23 00:20:58.475666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.968 [2024-07-23 00:20:58.477307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.968 [2024-07-23 00:20:58.477339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:43.968 [2024-07-23 00:20:58.477353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:16:43.968 [2024-07-23 00:20:58.477363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.969 [2024-07-23 00:20:58.478540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.969 [2024-07-23 00:20:58.478576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:43.969 [2024-07-23 00:20:58.478590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.140 ms 00:16:43.969 [2024-07-23 00:20:58.478599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.969 [2024-07-23 00:20:58.479724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.969 [2024-07-23 00:20:58.479758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:43.969 [2024-07-23 00:20:58.479772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:16:43.969 [2024-07-23 00:20:58.479781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.969 [2024-07-23 00:20:58.479815] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:43.969 [2024-07-23 00:20:58.479831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.479845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.479856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.479872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.479882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.479895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.479906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.479918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.479929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.479942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.479952] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.479967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.479977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.479990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 
00:20:58.480280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:16:43.969 [2024-07-23 00:20:58.480575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:43.969 [2024-07-23 00:20:58.480798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.480997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.481011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.481021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.481036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:43.970 [2024-07-23 00:20:58.481054] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:43.970 [2024-07-23 00:20:58.481075] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05fcf3be-df9c-41ef-ae3d-41db770ccbb0 00:16:43.970 [2024-07-23 00:20:58.481086] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:43.970 [2024-07-23 00:20:58.481097] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:43.970 [2024-07-23 00:20:58.481109] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:43.970 [2024-07-23 00:20:58.481122] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:43.970 [2024-07-23 00:20:58.481131] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:43.970 [2024-07-23 00:20:58.481144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:43.970 [2024-07-23 00:20:58.481153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:43.970 [2024-07-23 00:20:58.481164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:43.970 [2024-07-23 00:20:58.481173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:43.970 [2024-07-23 00:20:58.481185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.970 
[2024-07-23 00:20:58.481195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:43.970 [2024-07-23 00:20:58.481207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:16:43.970 [2024-07-23 00:20:58.481225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.970 [2024-07-23 00:20:58.482940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.970 [2024-07-23 00:20:58.482962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:43.970 [2024-07-23 00:20:58.482976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:16:43.970 [2024-07-23 00:20:58.482992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.970 [2024-07-23 00:20:58.483117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.970 [2024-07-23 00:20:58.483128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:43.970 [2024-07-23 00:20:58.483147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:16:43.970 [2024-07-23 00:20:58.483157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.970 [2024-07-23 00:20:58.490455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.970 [2024-07-23 00:20:58.490616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:43.970 [2024-07-23 00:20:58.490734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.970 [2024-07-23 00:20:58.490773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.970 [2024-07-23 00:20:58.490872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.970 [2024-07-23 00:20:58.490908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:43.970 [2024-07-23 00:20:58.490942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.970 [2024-07-23 00:20:58.491030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.970 [2024-07-23 00:20:58.491123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.970 [2024-07-23 00:20:58.491165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:43.970 [2024-07-23 00:20:58.491199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.970 [2024-07-23 00:20:58.491230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.970 [2024-07-23 00:20:58.491354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.970 [2024-07-23 00:20:58.491396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:43.970 [2024-07-23 00:20:58.491432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.970 [2024-07-23 00:20:58.491463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.970 [2024-07-23 00:20:58.503582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.970 [2024-07-23 00:20:58.503774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:43.970 [2024-07-23 00:20:58.503900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.970 [2024-07-23 00:20:58.503941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.970 [2024-07-23 00:20:58.512418] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.970 [2024-07-23 00:20:58.512565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:43.970 [2024-07-23 00:20:58.512647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.970 [2024-07-23 00:20:58.512684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.970 [2024-07-23 00:20:58.512771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.970 [2024-07-23 00:20:58.512808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:43.970 [2024-07-23 00:20:58.512846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.970 [2024-07-23 00:20:58.512877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.970 [2024-07-23 00:20:58.512985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.970 [2024-07-23 00:20:58.513024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:43.970 [2024-07-23 00:20:58.513059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.970 [2024-07-23 00:20:58.513113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.970 [2024-07-23 00:20:58.513230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.970 [2024-07-23 00:20:58.513585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:43.970 [2024-07-23 00:20:58.513609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.970 [2024-07-23 00:20:58.513622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.970 [2024-07-23 00:20:58.513687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.970 [2024-07-23 00:20:58.513700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:43.970 [2024-07-23 00:20:58.513713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.970 [2024-07-23 00:20:58.513723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.970 [2024-07-23 00:20:58.513773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.970 [2024-07-23 00:20:58.513785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:43.970 [2024-07-23 00:20:58.513798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.970 [2024-07-23 00:20:58.513811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.970 [2024-07-23 00:20:58.513861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.970 [2024-07-23 00:20:58.513874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:43.970 [2024-07-23 00:20:58.513887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.970 [2024-07-23 00:20:58.513897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.970 [2024-07-23 00:20:58.514044] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.404 ms, result 0 00:16:44.229 00:20:58 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:44.229 00:20:58 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:44.229 [2024-07-23 00:20:58.848613] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:44.229 [2024-07-23 00:20:58.848764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89061 ] 00:16:44.488 [2024-07-23 00:20:58.997690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.488 [2024-07-23 00:20:59.039929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.488 [2024-07-23 00:20:59.141849] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:44.488 [2024-07-23 00:20:59.141925] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:44.748 [2024-07-23 00:20:59.293471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.748 [2024-07-23 00:20:59.293520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:44.748 [2024-07-23 00:20:59.293535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:44.749 [2024-07-23 00:20:59.293545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.749 [2024-07-23 00:20:59.295908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.749 [2024-07-23 00:20:59.295946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.749 [2024-07-23 00:20:59.295966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.338 ms 00:16:44.749 [2024-07-23 00:20:59.295976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.749 [2024-07-23 00:20:59.296061] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:44.749 [2024-07-23 00:20:59.296303] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:44.749 [2024-07-23 00:20:59.296323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.749 [2024-07-23 00:20:59.296333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.749 [2024-07-23 00:20:59.296348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:16:44.749 [2024-07-23 00:20:59.296364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.749 [2024-07-23 00:20:59.297833] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:44.749 [2024-07-23 00:20:59.300273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.749 [2024-07-23 00:20:59.300305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:44.749 [2024-07-23 00:20:59.300318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.445 ms 00:16:44.749 [2024-07-23 00:20:59.300329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.749 [2024-07-23 00:20:59.300394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.749 [2024-07-23 00:20:59.300407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:44.749 [2024-07-23 00:20:59.300427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 
ms 00:16:44.749 [2024-07-23 00:20:59.300447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.749 [2024-07-23 00:20:59.307121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.749 [2024-07-23 00:20:59.307147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.749 [2024-07-23 00:20:59.307158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.641 ms 00:16:44.749 [2024-07-23 00:20:59.307168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.749 [2024-07-23 00:20:59.307326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.749 [2024-07-23 00:20:59.307342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.749 [2024-07-23 00:20:59.307352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:16:44.749 [2024-07-23 00:20:59.307365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.749 [2024-07-23 00:20:59.307397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.749 [2024-07-23 00:20:59.307411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:44.749 [2024-07-23 00:20:59.307421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:44.749 [2024-07-23 00:20:59.307430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.749 [2024-07-23 00:20:59.307469] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:44.749 [2024-07-23 00:20:59.309083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.749 [2024-07-23 00:20:59.309111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.749 [2024-07-23 00:20:59.309126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.623 ms 00:16:44.749 [2024-07-23 00:20:59.309136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.749 [2024-07-23 00:20:59.309183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.749 [2024-07-23 00:20:59.309194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:44.749 [2024-07-23 00:20:59.309204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:44.749 [2024-07-23 00:20:59.309214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.749 [2024-07-23 00:20:59.309234] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:44.749 [2024-07-23 00:20:59.309256] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:44.749 [2024-07-23 00:20:59.309331] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:44.749 [2024-07-23 00:20:59.309353] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:16:44.749 [2024-07-23 00:20:59.309435] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:44.749 [2024-07-23 00:20:59.309448] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:44.749 [2024-07-23 00:20:59.309468] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x168 bytes 00:16:44.749 [2024-07-23 00:20:59.309481] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:44.749 [2024-07-23 00:20:59.309493] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:44.749 [2024-07-23 00:20:59.309511] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:44.749 [2024-07-23 00:20:59.309528] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:44.749 [2024-07-23 00:20:59.309537] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:44.749 [2024-07-23 00:20:59.309551] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:44.749 [2024-07-23 00:20:59.309561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.749 [2024-07-23 00:20:59.309578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:44.749 [2024-07-23 00:20:59.309589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:16:44.749 [2024-07-23 00:20:59.309598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.749 [2024-07-23 00:20:59.309677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.749 [2024-07-23 00:20:59.309688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:44.749 [2024-07-23 00:20:59.309698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:44.749 [2024-07-23 00:20:59.309707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.749 [2024-07-23 00:20:59.309799] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:44.749 [2024-07-23 00:20:59.309818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:44.749 [2024-07-23 00:20:59.309829] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.749 [2024-07-23 00:20:59.309839] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.749 [2024-07-23 00:20:59.309849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:44.749 [2024-07-23 00:20:59.309858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:44.749 [2024-07-23 00:20:59.309868] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:44.749 [2024-07-23 00:20:59.309878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:44.749 [2024-07-23 00:20:59.309887] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:44.749 [2024-07-23 00:20:59.309896] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.749 [2024-07-23 00:20:59.309905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:44.749 [2024-07-23 00:20:59.309914] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:44.749 [2024-07-23 00:20:59.309926] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.749 [2024-07-23 00:20:59.309936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:44.749 [2024-07-23 00:20:59.309948] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:44.749 [2024-07-23 00:20:59.309957] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.749 [2024-07-23 00:20:59.309966] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:16:44.749 [2024-07-23 00:20:59.309975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:44.749 [2024-07-23 00:20:59.309985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.749 [2024-07-23 00:20:59.309994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:44.749 [2024-07-23 00:20:59.310003] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:44.749 [2024-07-23 00:20:59.310012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.749 [2024-07-23 00:20:59.310021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:44.749 [2024-07-23 00:20:59.310030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:44.749 [2024-07-23 00:20:59.310039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.749 [2024-07-23 00:20:59.310048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:44.749 [2024-07-23 00:20:59.310057] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:44.749 [2024-07-23 00:20:59.310066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.749 [2024-07-23 00:20:59.310080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:44.749 [2024-07-23 00:20:59.310089] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:44.749 [2024-07-23 00:20:59.310098] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.749 [2024-07-23 00:20:59.310106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:44.749 [2024-07-23 00:20:59.310115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:44.749 [2024-07-23 00:20:59.310124] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.749 [2024-07-23 00:20:59.310133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:44.749 [2024-07-23 00:20:59.310142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:44.749 [2024-07-23 00:20:59.310151] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.749 [2024-07-23 00:20:59.310160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:44.749 [2024-07-23 00:20:59.310169] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:44.749 [2024-07-23 00:20:59.310177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.749 [2024-07-23 00:20:59.310186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:44.749 [2024-07-23 00:20:59.310195] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:44.750 [2024-07-23 00:20:59.310204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.750 [2024-07-23 00:20:59.310213] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:44.750 [2024-07-23 00:20:59.310232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:44.750 [2024-07-23 00:20:59.310242] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.750 [2024-07-23 00:20:59.310253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.750 [2024-07-23 00:20:59.310492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:44.750 [2024-07-23 00:20:59.310539] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:44.750 [2024-07-23 00:20:59.310569] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:44.750 [2024-07-23 00:20:59.310598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:44.750 [2024-07-23 00:20:59.310627] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:44.750 [2024-07-23 00:20:59.310656] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:44.750 [2024-07-23 00:20:59.310687] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:44.750 [2024-07-23 00:20:59.310736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.750 [2024-07-23 00:20:59.310966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:44.750 [2024-07-23 00:20:59.311065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:44.750 [2024-07-23 00:20:59.311224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:44.750 [2024-07-23 00:20:59.311282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:44.750 [2024-07-23 00:20:59.311329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:44.750 [2024-07-23 00:20:59.311379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:44.750 [2024-07-23 00:20:59.311425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:44.750 [2024-07-23 00:20:59.311471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:44.750 [2024-07-23 00:20:59.311586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:44.750 [2024-07-23 00:20:59.311635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:44.750 [2024-07-23 00:20:59.311681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:44.750 [2024-07-23 00:20:59.311727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:44.750 [2024-07-23 00:20:59.311773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:44.750 [2024-07-23 00:20:59.311870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:44.750 [2024-07-23 00:20:59.311881] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:44.750 [2024-07-23 00:20:59.311898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.750 [2024-07-23 00:20:59.311919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:44.750 [2024-07-23 00:20:59.311929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:44.750 [2024-07-23 00:20:59.311939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:44.750 [2024-07-23 00:20:59.311950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:44.750 [2024-07-23 00:20:59.311962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.750 [2024-07-23 00:20:59.311976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:44.750 [2024-07-23 00:20:59.311987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.218 ms 00:16:44.750 [2024-07-23 00:20:59.311998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.750 [2024-07-23 00:20:59.331658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.750 [2024-07-23 00:20:59.331692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:44.750 [2024-07-23 00:20:59.331706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.613 ms 00:16:44.750 [2024-07-23 00:20:59.331720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.750 [2024-07-23 00:20:59.331836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.750 [2024-07-23 00:20:59.331850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:44.750 [2024-07-23 00:20:59.331860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:44.750 [2024-07-23 00:20:59.331870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.750 [2024-07-23 00:20:59.342313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.750 [2024-07-23 00:20:59.342357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:44.750 [2024-07-23 00:20:59.342371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.435 ms 00:16:44.750 [2024-07-23 00:20:59.342386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.750 [2024-07-23 00:20:59.342455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.750 [2024-07-23 00:20:59.342469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:44.750 [2024-07-23 00:20:59.342481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:44.750 [2024-07-23 00:20:59.342491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.750 [2024-07-23 00:20:59.342934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.750 [2024-07-23 00:20:59.342959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:44.750 [2024-07-23 00:20:59.342971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:16:44.750 [2024-07-23 00:20:59.342982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.750 [2024-07-23 00:20:59.343110] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:16:44.750 [2024-07-23 00:20:59.343124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:44.750 [2024-07-23 00:20:59.343135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:16:44.750 [2024-07-23 00:20:59.343146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.750 [2024-07-23 00:20:59.349555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.750 [2024-07-23 00:20:59.349587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:44.750 [2024-07-23 00:20:59.349600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.387 ms 00:16:44.750 [2024-07-23 00:20:59.349610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.750 [2024-07-23 00:20:59.352242] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:44.750 [2024-07-23 00:20:59.352289] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:44.750 [2024-07-23 00:20:59.352305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.750 [2024-07-23 00:20:59.352319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:44.750 [2024-07-23 00:20:59.352330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:16:44.750 [2024-07-23 00:20:59.352340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.750 [2024-07-23 00:20:59.365092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.750 [2024-07-23 00:20:59.365148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:44.750 [2024-07-23 00:20:59.365162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.723 ms 00:16:44.750 [2024-07-23 00:20:59.365186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.750 [2024-07-23 00:20:59.366983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.750 [2024-07-23 00:20:59.367017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:44.750 [2024-07-23 00:20:59.367029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.696 ms 00:16:44.750 [2024-07-23 00:20:59.367039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.750 [2024-07-23 00:20:59.368584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.750 [2024-07-23 00:20:59.368605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:44.750 [2024-07-23 00:20:59.368615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.504 ms 00:16:44.750 [2024-07-23 00:20:59.368625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.750 [2024-07-23 00:20:59.368911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.750 [2024-07-23 00:20:59.368932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:44.750 [2024-07-23 00:20:59.368944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:16:44.750 [2024-07-23 00:20:59.368954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.750 [2024-07-23 00:20:59.389739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.750 [2024-07-23 
00:20:59.389791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:44.750 [2024-07-23 00:20:59.389817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.792 ms 00:16:44.750 [2024-07-23 00:20:59.389828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.750 [2024-07-23 00:20:59.396075] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:44.750 [2024-07-23 00:20:59.412315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.750 [2024-07-23 00:20:59.412359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:44.750 [2024-07-23 00:20:59.412373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.449 ms 00:16:44.750 [2024-07-23 00:20:59.412395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.750 [2024-07-23 00:20:59.412496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.751 [2024-07-23 00:20:59.412509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:44.751 [2024-07-23 00:20:59.412524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:44.751 [2024-07-23 00:20:59.412534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.751 [2024-07-23 00:20:59.412588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.751 [2024-07-23 00:20:59.412600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:44.751 [2024-07-23 00:20:59.412610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:44.751 [2024-07-23 00:20:59.412620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.751 [2024-07-23 00:20:59.412644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.751 [2024-07-23 00:20:59.412655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:44.751 [2024-07-23 00:20:59.412677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:44.751 [2024-07-23 00:20:59.412697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.751 [2024-07-23 00:20:59.412732] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:44.751 [2024-07-23 00:20:59.412744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.751 [2024-07-23 00:20:59.412755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:44.751 [2024-07-23 00:20:59.412771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:44.751 [2024-07-23 00:20:59.412781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.751 [2024-07-23 00:20:59.416613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.751 [2024-07-23 00:20:59.416650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:44.751 [2024-07-23 00:20:59.416663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.817 ms 00:16:44.751 [2024-07-23 00:20:59.416680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.751 [2024-07-23 00:20:59.416766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.751 [2024-07-23 00:20:59.416779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:44.751 [2024-07-23 
00:20:59.416790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:44.751 [2024-07-23 00:20:59.416800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.751 [2024-07-23 00:20:59.417773] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:44.751 [2024-07-23 00:20:59.418738] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 124.229 ms, result 0 00:16:44.751 [2024-07-23 00:20:59.419473] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:44.751 [2024-07-23 00:20:59.429277] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:54.687  Copying: 29/256 [MB] (29 MBps) Copying: 55/256 [MB] (25 MBps) Copying: 81/256 [MB] (25 MBps) Copying: 108/256 [MB] (26 MBps) Copying: 135/256 [MB] (27 MBps) Copying: 161/256 [MB] (26 MBps) Copying: 187/256 [MB] (25 MBps) Copying: 213/256 [MB] (26 MBps) Copying: 238/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 26 MBps)[2024-07-23 00:21:09.096791] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:54.687 [2024-07-23 00:21:09.098146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.687 [2024-07-23 00:21:09.098183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:54.687 [2024-07-23 00:21:09.098199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:54.687 [2024-07-23 00:21:09.098209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.687 [2024-07-23 00:21:09.098230] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:54.687 [2024-07-23 00:21:09.098897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.687 [2024-07-23 00:21:09.098919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:54.687 [2024-07-23 00:21:09.098931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:16:54.687 [2024-07-23 00:21:09.098950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.687 [2024-07-23 00:21:09.099158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.687 [2024-07-23 00:21:09.099170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:54.687 [2024-07-23 00:21:09.099185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:16:54.687 [2024-07-23 00:21:09.099195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.687 [2024-07-23 00:21:09.102071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.687 [2024-07-23 00:21:09.102096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:54.687 [2024-07-23 00:21:09.102116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.865 ms 00:16:54.687 [2024-07-23 00:21:09.102133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.687 [2024-07-23 00:21:09.107778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.687 [2024-07-23 00:21:09.107809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:54.687 [2024-07-23 00:21:09.107820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 5.635 ms 00:16:54.687 [2024-07-23 00:21:09.107844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.687 [2024-07-23 00:21:09.109303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.687 [2024-07-23 00:21:09.109337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:54.687 [2024-07-23 00:21:09.109350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.426 ms 00:16:54.687 [2024-07-23 00:21:09.109360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.687 [2024-07-23 00:21:09.113164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.687 [2024-07-23 00:21:09.113202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:54.687 [2024-07-23 00:21:09.113214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.779 ms 00:16:54.687 [2024-07-23 00:21:09.113224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.687 [2024-07-23 00:21:09.113347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.687 [2024-07-23 00:21:09.113360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:54.687 [2024-07-23 00:21:09.113375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:16:54.687 [2024-07-23 00:21:09.113385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.687 [2024-07-23 00:21:09.115457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.687 [2024-07-23 00:21:09.115491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:54.687 [2024-07-23 00:21:09.115502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.058 ms 00:16:54.687 [2024-07-23 00:21:09.115512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.687 [2024-07-23 00:21:09.116959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.687 [2024-07-23 00:21:09.116993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:54.687 [2024-07-23 00:21:09.117004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.419 ms 00:16:54.687 [2024-07-23 00:21:09.117013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.687 [2024-07-23 00:21:09.118332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.687 [2024-07-23 00:21:09.118364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:54.687 [2024-07-23 00:21:09.118376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms 00:16:54.687 [2024-07-23 00:21:09.118385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.687 [2024-07-23 00:21:09.119447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.687 [2024-07-23 00:21:09.119481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:54.687 [2024-07-23 00:21:09.119492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:16:54.687 [2024-07-23 00:21:09.119501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.687 [2024-07-23 00:21:09.119530] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:54.687 [2024-07-23 00:21:09.119546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 
0 state: free 00:16:54.687 [2024-07-23 00:21:09.119558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 
261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.119998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120360] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:54.688 [2024-07-23 00:21:09.120516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:54.689 [2024-07-23 00:21:09.120537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:54.689 [2024-07-23 00:21:09.120547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:54.689 [2024-07-23 00:21:09.120558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:54.689 [2024-07-23 00:21:09.120568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:54.689 [2024-07-23 00:21:09.120579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:54.689 [2024-07-23 00:21:09.120589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:54.689 [2024-07-23 00:21:09.120600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:54.689 [2024-07-23 00:21:09.120610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:54.689 [2024-07-23 00:21:09.120621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:54.689 [2024-07-23 
00:21:09.120639] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:54.689 [2024-07-23 00:21:09.120649] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05fcf3be-df9c-41ef-ae3d-41db770ccbb0 00:16:54.689 [2024-07-23 00:21:09.120660] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:54.689 [2024-07-23 00:21:09.120669] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:54.689 [2024-07-23 00:21:09.120678] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:54.689 [2024-07-23 00:21:09.120689] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:54.689 [2024-07-23 00:21:09.120706] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:54.689 [2024-07-23 00:21:09.120720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:54.689 [2024-07-23 00:21:09.120729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:54.689 [2024-07-23 00:21:09.120738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:54.689 [2024-07-23 00:21:09.120747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:54.689 [2024-07-23 00:21:09.120756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.689 [2024-07-23 00:21:09.120766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:54.689 [2024-07-23 00:21:09.120776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:16:54.689 [2024-07-23 00:21:09.120789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.689 [2024-07-23 00:21:09.122528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.689 [2024-07-23 00:21:09.122550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:54.689 [2024-07-23 00:21:09.122560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.723 ms 00:16:54.689 [2024-07-23 00:21:09.122574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.689 [2024-07-23 00:21:09.122678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.689 [2024-07-23 00:21:09.122689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:54.689 [2024-07-23 00:21:09.122699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:16:54.689 [2024-07-23 00:21:09.122709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.689 [2024-07-23 00:21:09.128974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.689 [2024-07-23 00:21:09.128998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:54.689 [2024-07-23 00:21:09.129013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.689 [2024-07-23 00:21:09.129023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.689 [2024-07-23 00:21:09.129098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.689 [2024-07-23 00:21:09.129109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:54.689 [2024-07-23 00:21:09.129120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.689 [2024-07-23 00:21:09.129130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.689 [2024-07-23 00:21:09.129171] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.689 [2024-07-23 00:21:09.129184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:54.689 [2024-07-23 00:21:09.129194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.689 [2024-07-23 00:21:09.129203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.689 [2024-07-23 00:21:09.129232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.689 [2024-07-23 00:21:09.129245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:54.689 [2024-07-23 00:21:09.129255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.689 [2024-07-23 00:21:09.129281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.689 [2024-07-23 00:21:09.141664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.689 [2024-07-23 00:21:09.141713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:54.689 [2024-07-23 00:21:09.141726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.689 [2024-07-23 00:21:09.141745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.689 [2024-07-23 00:21:09.150043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.689 [2024-07-23 00:21:09.150084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:54.689 [2024-07-23 00:21:09.150108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.689 [2024-07-23 00:21:09.150119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.689 [2024-07-23 00:21:09.150147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.689 [2024-07-23 00:21:09.150165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:54.689 [2024-07-23 00:21:09.150176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.689 [2024-07-23 00:21:09.150185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.689 [2024-07-23 00:21:09.150219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.689 [2024-07-23 00:21:09.150230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:54.689 [2024-07-23 00:21:09.150240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.689 [2024-07-23 00:21:09.150250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.689 [2024-07-23 00:21:09.150590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.689 [2024-07-23 00:21:09.150631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:54.689 [2024-07-23 00:21:09.150662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.689 [2024-07-23 00:21:09.150692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.689 [2024-07-23 00:21:09.150754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.689 [2024-07-23 00:21:09.150795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:54.689 [2024-07-23 00:21:09.150825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.689 [2024-07-23 00:21:09.150856] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:54.689 [2024-07-23 00:21:09.150908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.689 [2024-07-23 00:21:09.150919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:54.689 [2024-07-23 00:21:09.150929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.689 [2024-07-23 00:21:09.150939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.689 [2024-07-23 00:21:09.150995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.689 [2024-07-23 00:21:09.151008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:54.689 [2024-07-23 00:21:09.151025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.689 [2024-07-23 00:21:09.151044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.689 [2024-07-23 00:21:09.151195] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.113 ms, result 0 00:16:54.948 00:16:54.948 00:16:54.948 00:21:09 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:54.948 00:21:09 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:55.207 00:21:09 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:55.466 [2024-07-23 00:21:09.901204] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:55.466 [2024-07-23 00:21:09.901476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89177 ] 00:16:55.466 [2024-07-23 00:21:10.051660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.466 [2024-07-23 00:21:10.095951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.726 [2024-07-23 00:21:10.197883] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:55.726 [2024-07-23 00:21:10.197954] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:55.726 [2024-07-23 00:21:10.348666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-07-23 00:21:10.348715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:55.726 [2024-07-23 00:21:10.348751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:55.726 [2024-07-23 00:21:10.348762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-07-23 00:21:10.351171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-07-23 00:21:10.351210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:55.726 [2024-07-23 00:21:10.351222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.392 ms 00:16:55.726 [2024-07-23 00:21:10.351232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-07-23 00:21:10.351329] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:55.726 
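At this point trim.sh verifies the trim itself: it compares the first 4 MiB of the dumped file /home/vagrant/spdk_repo/spdk/test/ftl/data against /dev/zero with cmp --bytes=4194304 (which passes only if that range reads back as all zeroes), checksums the file with md5sum, and then uses spdk_dd to write 1024 blocks of random_pattern back into ftl0. A rough stand-alone equivalent of the cmp step, with the path taken from the log and the chunk size an arbitrary choice:

import sys

# Hedged sketch of the `cmp --bytes=4194304 <file> /dev/zero` check above:
# succeed only if the first 4 MiB of the dumped FTL data are all zero bytes.
def is_zero_prefix(path: str, nbytes: int = 4 * 1024 * 1024, chunk: int = 1 << 20) -> bool:
    remaining = nbytes
    with open(path, "rb") as f:
        while remaining > 0:
            buf = f.read(min(chunk, remaining))
            if not buf:                      # file shorter than the checked range
                return False
            if buf.count(0) != len(buf):     # any non-zero byte fails the check
                return False
            remaining -= len(buf)
    return True

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "test/ftl/data"
    print("all zeroes" if is_zero_prefix(path) else "non-zero bytes found")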
[2024-07-23 00:21:10.351543] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:55.726 [2024-07-23 00:21:10.351561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-07-23 00:21:10.351571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:55.726 [2024-07-23 00:21:10.351585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:16:55.726 [2024-07-23 00:21:10.351594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-07-23 00:21:10.353080] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:55.726 [2024-07-23 00:21:10.355564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-07-23 00:21:10.355599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:55.726 [2024-07-23 00:21:10.355612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.488 ms 00:16:55.726 [2024-07-23 00:21:10.355631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-07-23 00:21:10.355697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-07-23 00:21:10.355710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:55.726 [2024-07-23 00:21:10.355721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:55.726 [2024-07-23 00:21:10.355733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-07-23 00:21:10.362571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-07-23 00:21:10.362599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:55.726 [2024-07-23 00:21:10.362610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.799 ms 00:16:55.726 [2024-07-23 00:21:10.362619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-07-23 00:21:10.362754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-07-23 00:21:10.362784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:55.726 [2024-07-23 00:21:10.362795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:55.726 [2024-07-23 00:21:10.362808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-07-23 00:21:10.362840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-07-23 00:21:10.362855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:55.726 [2024-07-23 00:21:10.362865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:55.726 [2024-07-23 00:21:10.362874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-07-23 00:21:10.362898] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:55.726 [2024-07-23 00:21:10.364550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-07-23 00:21:10.364577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:55.726 [2024-07-23 00:21:10.364592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.661 ms 00:16:55.726 [2024-07-23 00:21:10.364602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:55.726 [2024-07-23 00:21:10.364648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-07-23 00:21:10.364659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:55.726 [2024-07-23 00:21:10.364669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:55.726 [2024-07-23 00:21:10.364679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-07-23 00:21:10.364707] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:55.726 [2024-07-23 00:21:10.364729] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:55.726 [2024-07-23 00:21:10.364772] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:55.727 [2024-07-23 00:21:10.364795] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:16:55.727 [2024-07-23 00:21:10.364877] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:55.727 [2024-07-23 00:21:10.364891] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:55.727 [2024-07-23 00:21:10.364904] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:55.727 [2024-07-23 00:21:10.364916] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:55.727 [2024-07-23 00:21:10.364927] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:55.727 [2024-07-23 00:21:10.364939] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:55.727 [2024-07-23 00:21:10.364949] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:55.727 [2024-07-23 00:21:10.364958] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:55.727 [2024-07-23 00:21:10.364971] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:55.727 [2024-07-23 00:21:10.364981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.727 [2024-07-23 00:21:10.364992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:55.727 [2024-07-23 00:21:10.365002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:16:55.727 [2024-07-23 00:21:10.365012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.727 [2024-07-23 00:21:10.365094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.727 [2024-07-23 00:21:10.365105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:55.727 [2024-07-23 00:21:10.365116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:16:55.727 [2024-07-23 00:21:10.365125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.727 [2024-07-23 00:21:10.365210] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:55.727 [2024-07-23 00:21:10.365223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:55.727 [2024-07-23 00:21:10.365240] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.727 [2024-07-23 
00:21:10.365250] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.727 [2024-07-23 00:21:10.365293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:55.727 [2024-07-23 00:21:10.365303] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:55.727 [2024-07-23 00:21:10.365312] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:55.727 [2024-07-23 00:21:10.365323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:55.727 [2024-07-23 00:21:10.365332] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:55.727 [2024-07-23 00:21:10.365341] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:55.727 [2024-07-23 00:21:10.365351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:55.727 [2024-07-23 00:21:10.365360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:55.727 [2024-07-23 00:21:10.365372] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:55.727 [2024-07-23 00:21:10.365382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:55.727 [2024-07-23 00:21:10.365392] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:55.727 [2024-07-23 00:21:10.365401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.727 [2024-07-23 00:21:10.365410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:55.727 [2024-07-23 00:21:10.365419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:55.727 [2024-07-23 00:21:10.365427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.727 [2024-07-23 00:21:10.365436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:55.727 [2024-07-23 00:21:10.365445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:55.727 [2024-07-23 00:21:10.365454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.727 [2024-07-23 00:21:10.365463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:55.727 [2024-07-23 00:21:10.365472] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:55.727 [2024-07-23 00:21:10.365481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.727 [2024-07-23 00:21:10.365490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:55.727 [2024-07-23 00:21:10.365498] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:55.727 [2024-07-23 00:21:10.365507] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.727 [2024-07-23 00:21:10.365523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:55.727 [2024-07-23 00:21:10.365532] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:55.727 [2024-07-23 00:21:10.365541] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.727 [2024-07-23 00:21:10.365550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:55.727 [2024-07-23 00:21:10.365558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:55.727 [2024-07-23 00:21:10.365567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:55.727 [2024-07-23 00:21:10.365575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 
00:16:55.727 [2024-07-23 00:21:10.365584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:55.727 [2024-07-23 00:21:10.365593] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:55.727 [2024-07-23 00:21:10.365602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:55.727 [2024-07-23 00:21:10.365610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:55.727 [2024-07-23 00:21:10.365619] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.727 [2024-07-23 00:21:10.365628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:55.727 [2024-07-23 00:21:10.365637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:55.727 [2024-07-23 00:21:10.365646] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.727 [2024-07-23 00:21:10.365654] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:55.727 [2024-07-23 00:21:10.365667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:55.727 [2024-07-23 00:21:10.365679] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.727 [2024-07-23 00:21:10.365689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.727 [2024-07-23 00:21:10.365705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:55.727 [2024-07-23 00:21:10.365715] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:55.727 [2024-07-23 00:21:10.365725] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:55.727 [2024-07-23 00:21:10.365734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:55.727 [2024-07-23 00:21:10.365743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:55.727 [2024-07-23 00:21:10.365752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:55.727 [2024-07-23 00:21:10.365762] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:55.727 [2024-07-23 00:21:10.365774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.727 [2024-07-23 00:21:10.365786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:55.727 [2024-07-23 00:21:10.365796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:55.727 [2024-07-23 00:21:10.365807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:55.727 [2024-07-23 00:21:10.365817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:55.727 [2024-07-23 00:21:10.365827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:55.727 [2024-07-23 00:21:10.365841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:55.727 [2024-07-23 00:21:10.365851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 
blk_offs:0x7320 blk_sz:0x800 00:16:55.727 [2024-07-23 00:21:10.365861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:55.727 [2024-07-23 00:21:10.365871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:55.727 [2024-07-23 00:21:10.365881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:55.727 [2024-07-23 00:21:10.365891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:55.727 [2024-07-23 00:21:10.365901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:55.727 [2024-07-23 00:21:10.365911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:55.727 [2024-07-23 00:21:10.365922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:55.727 [2024-07-23 00:21:10.365931] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:55.727 [2024-07-23 00:21:10.365945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.727 [2024-07-23 00:21:10.365964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:55.727 [2024-07-23 00:21:10.365975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:55.727 [2024-07-23 00:21:10.365985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:55.727 [2024-07-23 00:21:10.365995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:55.727 [2024-07-23 00:21:10.366006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.727 [2024-07-23 00:21:10.366026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:55.727 [2024-07-23 00:21:10.366037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:16:55.727 [2024-07-23 00:21:10.366046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.728 [2024-07-23 00:21:10.388814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.728 [2024-07-23 00:21:10.388856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:55.728 [2024-07-23 00:21:10.388874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.751 ms 00:16:55.728 [2024-07-23 00:21:10.388892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.728 [2024-07-23 00:21:10.389038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.728 [2024-07-23 00:21:10.389079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:55.728 [2024-07-23 00:21:10.389093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:55.728 
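The layout dump above is internally consistent, which is worth checking whenever region sizes change between SPDK revisions: 23592960 L2P entries at 4 bytes per address need exactly 90 MiB, matching the l2p region, and each of the four p2l regions holds the 2048 checkpoint pages if the FTL block size is 4 KiB (an assumption, but the one that makes every figure come out exact). A short arithmetic check:

# Consistency check on the layout dump above. The 4096-byte FTL block size
# is an assumption; it is what makes the printed MiB figures come out exact.
MIB = 1024 * 1024
FTL_BLOCK = 4096

l2p_entries = 23592960      # "L2P entries" from the log
l2p_addr_size = 4           # "L2P address size" (bytes per entry)
print(l2p_entries * l2p_addr_size / MIB)          # 90.0  -> "Region l2p ... blocks: 90.00 MiB"

p2l_pages = 2048            # "P2L checkpoint pages"
print(p2l_pages * FTL_BLOCK / MIB)                # 8.0   -> "Region p2l0..p2l3 ... blocks: 8.00 MiB"

band_blocks = 261120        # per-band block count from the "Bands validity" dump further down
print(band_blocks * FTL_BLOCK / MIB)              # 1020.0 MiB per band
print(102400 * MIB // (band_blocks * FTL_BLOCK))  # 100   -> the 100 bands dumped at shutdown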
[2024-07-23 00:21:10.389106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.728 [2024-07-23 00:21:10.399835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.728 [2024-07-23 00:21:10.399870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:55.728 [2024-07-23 00:21:10.399884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.715 ms 00:16:55.728 [2024-07-23 00:21:10.399906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.728 [2024-07-23 00:21:10.399972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.728 [2024-07-23 00:21:10.399984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:55.728 [2024-07-23 00:21:10.399995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:55.728 [2024-07-23 00:21:10.400005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.728 [2024-07-23 00:21:10.400473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.728 [2024-07-23 00:21:10.400490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:55.728 [2024-07-23 00:21:10.400501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:16:55.728 [2024-07-23 00:21:10.400510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.728 [2024-07-23 00:21:10.400631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.728 [2024-07-23 00:21:10.400644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:55.728 [2024-07-23 00:21:10.400654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:16:55.728 [2024-07-23 00:21:10.400664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.406956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.406988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:55.988 [2024-07-23 00:21:10.407000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.279 ms 00:16:55.988 [2024-07-23 00:21:10.407011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.409653] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:55.988 [2024-07-23 00:21:10.409688] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:55.988 [2024-07-23 00:21:10.409703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.409717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:55.988 [2024-07-23 00:21:10.409727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.599 ms 00:16:55.988 [2024-07-23 00:21:10.409737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.422362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.422408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:55.988 [2024-07-23 00:21:10.422432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.598 ms 00:16:55.988 [2024-07-23 00:21:10.422462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.424377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.424409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:55.988 [2024-07-23 00:21:10.424421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.839 ms 00:16:55.988 [2024-07-23 00:21:10.424430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.425863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.425898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:55.988 [2024-07-23 00:21:10.425910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.391 ms 00:16:55.988 [2024-07-23 00:21:10.425919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.426206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.426227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:55.988 [2024-07-23 00:21:10.426239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:16:55.988 [2024-07-23 00:21:10.426249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.446845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.446910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:55.988 [2024-07-23 00:21:10.446927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.584 ms 00:16:55.988 [2024-07-23 00:21:10.446952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.453117] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:55.988 [2024-07-23 00:21:10.469582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.469632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:55.988 [2024-07-23 00:21:10.469648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.583 ms 00:16:55.988 [2024-07-23 00:21:10.469674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.469782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.469794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:55.988 [2024-07-23 00:21:10.469811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:55.988 [2024-07-23 00:21:10.469821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.469876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.469888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:55.988 [2024-07-23 00:21:10.469898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:55.988 [2024-07-23 00:21:10.469919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.469943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.469953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
00:16:55.988 [2024-07-23 00:21:10.469975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:55.988 [2024-07-23 00:21:10.469988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.470022] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:55.988 [2024-07-23 00:21:10.470035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.470045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:55.988 [2024-07-23 00:21:10.470062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:55.988 [2024-07-23 00:21:10.470072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.473954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.473991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:55.988 [2024-07-23 00:21:10.474004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.866 ms 00:16:55.988 [2024-07-23 00:21:10.474020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.474108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.474121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:55.988 [2024-07-23 00:21:10.474142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:55.988 [2024-07-23 00:21:10.474151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.475156] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:55.988 [2024-07-23 00:21:10.476123] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 126.422 ms, result 0 00:16:55.988 [2024-07-23 00:21:10.476839] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:55.988 [2024-07-23 00:21:10.486545] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:55.988  Copying: 4096/4096 [kB] (average 24 MBps)[2024-07-23 00:21:10.650910] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:55.988 [2024-07-23 00:21:10.651974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.652007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:55.988 [2024-07-23 00:21:10.652021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:55.988 [2024-07-23 00:21:10.652031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.652051] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:55.988 [2024-07-23 00:21:10.652719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.652735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:55.988 [2024-07-23 00:21:10.652746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:16:55.988 [2024-07-23 00:21:10.652755] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.654232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.654283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:55.988 [2024-07-23 00:21:10.654311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.459 ms 00:16:55.988 [2024-07-23 00:21:10.654327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.657501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.657532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:55.988 [2024-07-23 00:21:10.657544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.159 ms 00:16:55.988 [2024-07-23 00:21:10.657554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.663131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.663162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:55.988 [2024-07-23 00:21:10.663179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.539 ms 00:16:55.988 [2024-07-23 00:21:10.663188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.988 [2024-07-23 00:21:10.664646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.988 [2024-07-23 00:21:10.664679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:55.988 [2024-07-23 00:21:10.664690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.411 ms 00:16:55.988 [2024-07-23 00:21:10.664700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.249 [2024-07-23 00:21:10.668448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.249 [2024-07-23 00:21:10.668483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:56.249 [2024-07-23 00:21:10.668495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.724 ms 00:16:56.249 [2024-07-23 00:21:10.668505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.249 [2024-07-23 00:21:10.668610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.249 [2024-07-23 00:21:10.668622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:56.249 [2024-07-23 00:21:10.668636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:16:56.249 [2024-07-23 00:21:10.668646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.249 [2024-07-23 00:21:10.670725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.249 [2024-07-23 00:21:10.670757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:56.249 [2024-07-23 00:21:10.670768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.065 ms 00:16:56.249 [2024-07-23 00:21:10.670777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.249 [2024-07-23 00:21:10.672313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.249 [2024-07-23 00:21:10.672345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:56.249 [2024-07-23 00:21:10.672356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms 00:16:56.249 
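Each management step above is logged as an Action or Rollback record carrying a name, a duration and a status, and the finish_msg record gives the whole process total (126.422 ms for 'FTL startup' here); a handful of steps in the tens of milliseconds (Initialize metadata at 22.751 ms, Initialize L2P at 22.583 ms, Restore P2L checkpoints at 20.584 ms) account for most of that total. A small, hedged parser for pulling per-step durations out of a console log shaped like this one:

# Hedged sketch: scrape "name: ..." / "duration: ... ms" pairs emitted by the
# mngt/ftl_mngt.c trace_step records from a captured console log. The field
# layout is inferred from the lines above, not from the SPDK sources, so
# treat the regexes as assumptions.
import re
import sys

NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)(?=\s+\d{2}:\d{2}:|\s*\[20|\s*$)", re.M)
DUR_RE  = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def step_durations(text: str):
    # name (line 428) and duration (line 430) records are emitted pairwise
    # per step, so zipping the two match lists keeps them aligned.
    names = [n.strip() for n in NAME_RE.findall(text)]
    durs = [float(d) for d in DUR_RE.findall(text)]
    return list(zip(names, durs))

if __name__ == "__main__":
    log = open(sys.argv[1], encoding="utf-8", errors="replace").read()
    for name, ms in sorted(step_durations(log), key=lambda p: -p[1])[:10]:
        print(f"{ms:9.3f} ms  {name}")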
[2024-07-23 00:21:10.672365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.249 [2024-07-23 00:21:10.673610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.249 [2024-07-23 00:21:10.673642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:56.249 [2024-07-23 00:21:10.673653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:16:56.249 [2024-07-23 00:21:10.673662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.249 [2024-07-23 00:21:10.674913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.249 [2024-07-23 00:21:10.674947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:56.249 [2024-07-23 00:21:10.674958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:16:56.249 [2024-07-23 00:21:10.674967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.249 [2024-07-23 00:21:10.674993] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:56.249 [2024-07-23 00:21:10.675009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 
[2024-07-23 00:21:10.675188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:16:56.249 [2024-07-23 00:21:10.675474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:56.249 [2024-07-23 00:21:10.675567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.675994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.676004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.676014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.676024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.676036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.676047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.676057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.676067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:56.250 [2024-07-23 00:21:10.676084] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:56.250 [2024-07-23 00:21:10.676093] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05fcf3be-df9c-41ef-ae3d-41db770ccbb0 00:16:56.250 [2024-07-23 00:21:10.676104] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:56.250 [2024-07-23 00:21:10.676114] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:56.250 [2024-07-23 00:21:10.676123] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:56.250 [2024-07-23 00:21:10.676132] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:56.250 [2024-07-23 00:21:10.676141] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:56.250 [2024-07-23 00:21:10.676155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:56.250 [2024-07-23 00:21:10.676164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:56.250 [2024-07-23 00:21:10.676172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:56.250 [2024-07-23 00:21:10.676181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:56.250 [2024-07-23 00:21:10.676190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.250 [2024-07-23 00:21:10.676200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:56.250 [2024-07-23 00:21:10.676210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.200 ms 00:16:56.250 [2024-07-23 00:21:10.676231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.250 [2024-07-23 00:21:10.677976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.250 [2024-07-23 00:21:10.677999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:56.250 [2024-07-23 00:21:10.678010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.730 ms 00:16:56.250 [2024-07-23 00:21:10.678023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.250 [2024-07-23 00:21:10.678126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.250 [2024-07-23 00:21:10.678136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:56.250 [2024-07-23 00:21:10.678153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:16:56.250 [2024-07-23 00:21:10.678162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.250 [2024-07-23 00:21:10.684426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.250 [2024-07-23 00:21:10.684447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.250 [2024-07-23 00:21:10.684463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.250 [2024-07-23 00:21:10.684473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.250 [2024-07-23 00:21:10.684537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.250 [2024-07-23 00:21:10.684548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.250 [2024-07-23 00:21:10.684558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.250 [2024-07-23 00:21:10.684567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.250 [2024-07-23 00:21:10.684606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.250 [2024-07-23 00:21:10.684618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.250 [2024-07-23 00:21:10.684635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.250 [2024-07-23 00:21:10.684648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.250 [2024-07-23 00:21:10.684666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.250 [2024-07-23 00:21:10.684682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.250 [2024-07-23 00:21:10.684692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.250 [2024-07-23 00:21:10.684707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.250 [2024-07-23 00:21:10.696971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.250 [2024-07-23 00:21:10.697010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.250 [2024-07-23 00:21:10.697023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.250 [2024-07-23 00:21:10.697053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.250 [2024-07-23 00:21:10.705229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.250 [2024-07-23 00:21:10.705275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.250 [2024-07-23 00:21:10.705289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.250 [2024-07-23 00:21:10.705299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.250 [2024-07-23 00:21:10.705325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.250 [2024-07-23 00:21:10.705336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:56.250 [2024-07-23 00:21:10.705347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.250 [2024-07-23 00:21:10.705356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.251 [2024-07-23 00:21:10.705392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:16:56.251 [2024-07-23 00:21:10.705403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:56.251 [2024-07-23 00:21:10.705413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.251 [2024-07-23 00:21:10.705423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.251 [2024-07-23 00:21:10.705496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.251 [2024-07-23 00:21:10.705517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:56.251 [2024-07-23 00:21:10.705528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.251 [2024-07-23 00:21:10.705538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.251 [2024-07-23 00:21:10.705576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.251 [2024-07-23 00:21:10.705592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:56.251 [2024-07-23 00:21:10.705602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.251 [2024-07-23 00:21:10.705623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.251 [2024-07-23 00:21:10.705672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.251 [2024-07-23 00:21:10.705685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:56.251 [2024-07-23 00:21:10.705695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.251 [2024-07-23 00:21:10.705712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.251 [2024-07-23 00:21:10.705760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.251 [2024-07-23 00:21:10.705771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:56.251 [2024-07-23 00:21:10.705782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.251 [2024-07-23 00:21:10.705806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.251 [2024-07-23 00:21:10.705942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.025 ms, result 0 00:16:56.510 00:16:56.510 00:16:56.510 00:21:10 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=89191 00:16:56.510 00:21:10 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:56.510 00:21:10 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 89191 00:16:56.510 00:21:10 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 89191 ']' 00:16:56.510 00:21:10 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.510 00:21:10 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:56.510 00:21:10 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.510 00:21:10 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:56.510 00:21:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:56.510 [2024-07-23 00:21:11.050125] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
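The next test step starts a fresh target: spdk_tgt is launched with -L ftl_init, its pid is recorded as svcpid=89191, and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock accepts connections before rpc.py load_config restores the bdev configuration. A rough Python equivalent of that wait loop; the socket path comes from the log, while the timeout and poll interval are arbitrary choices and the real bash helper in autotest_common.sh may differ:

# Hedged sketch of a waitforlisten-style helper: poll a UNIX domain socket
# until something is listening on it or a timeout expires.
import socket
import time

def wait_for_listen(path: str = "/var/tmp/spdk.sock", timeout: float = 30.0, interval: float = 0.2) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)          # succeeds once the target is listening
            return True
        except OSError:
            time.sleep(interval)     # not up yet; retry until the deadline
        finally:
            s.close()
    return False

if __name__ == "__main__":
    print("listening" if wait_for_listen() else "timed out")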
00:16:56.510 [2024-07-23 00:21:11.050281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89191 ] 00:16:56.769 [2024-07-23 00:21:11.201841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.769 [2024-07-23 00:21:11.244121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.337 00:21:11 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:57.337 00:21:11 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:16:57.337 00:21:11 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:57.337 [2024-07-23 00:21:12.011836] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:57.337 [2024-07-23 00:21:12.011895] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:57.598 [2024-07-23 00:21:12.179209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.598 [2024-07-23 00:21:12.179256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:57.598 [2024-07-23 00:21:12.179297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:57.598 [2024-07-23 00:21:12.179307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.598 [2024-07-23 00:21:12.181667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.598 [2024-07-23 00:21:12.181703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:57.598 [2024-07-23 00:21:12.181721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.334 ms 00:16:57.598 [2024-07-23 00:21:12.181731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.598 [2024-07-23 00:21:12.181812] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:57.598 [2024-07-23 00:21:12.182025] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:57.598 [2024-07-23 00:21:12.182048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.598 [2024-07-23 00:21:12.182058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:57.598 [2024-07-23 00:21:12.182072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:16:57.598 [2024-07-23 00:21:12.182090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.598 [2024-07-23 00:21:12.183573] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:57.598 [2024-07-23 00:21:12.186039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.598 [2024-07-23 00:21:12.186075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:57.598 [2024-07-23 00:21:12.186088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.475 ms 00:16:57.598 [2024-07-23 00:21:12.186100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.598 [2024-07-23 00:21:12.186163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.598 [2024-07-23 00:21:12.186178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:57.598 [2024-07-23 00:21:12.186203] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:57.598 [2024-07-23 00:21:12.186218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.598 [2024-07-23 00:21:12.192991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.598 [2024-07-23 00:21:12.193017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:57.598 [2024-07-23 00:21:12.193028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.719 ms 00:16:57.598 [2024-07-23 00:21:12.193041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.598 [2024-07-23 00:21:12.193189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.598 [2024-07-23 00:21:12.193207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:57.598 [2024-07-23 00:21:12.193218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:16:57.598 [2024-07-23 00:21:12.193230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.598 [2024-07-23 00:21:12.193264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.598 [2024-07-23 00:21:12.193296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:57.598 [2024-07-23 00:21:12.193307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:57.598 [2024-07-23 00:21:12.193319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.598 [2024-07-23 00:21:12.193350] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:57.598 [2024-07-23 00:21:12.194948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.598 [2024-07-23 00:21:12.194978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:57.598 [2024-07-23 00:21:12.194995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:16:57.598 [2024-07-23 00:21:12.195018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.598 [2024-07-23 00:21:12.195065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.598 [2024-07-23 00:21:12.195076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:57.598 [2024-07-23 00:21:12.195089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:57.598 [2024-07-23 00:21:12.195098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.598 [2024-07-23 00:21:12.195121] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:57.598 [2024-07-23 00:21:12.195150] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:57.598 [2024-07-23 00:21:12.195185] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:57.598 [2024-07-23 00:21:12.195204] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:16:57.598 [2024-07-23 00:21:12.195313] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:57.598 [2024-07-23 00:21:12.195333] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:57.598 [2024-07-23 00:21:12.195352] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:57.598 [2024-07-23 00:21:12.195380] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:57.598 [2024-07-23 00:21:12.195395] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:57.598 [2024-07-23 00:21:12.195405] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:57.598 [2024-07-23 00:21:12.195420] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:57.598 [2024-07-23 00:21:12.195436] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:57.598 [2024-07-23 00:21:12.195448] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:57.598 [2024-07-23 00:21:12.195460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.598 [2024-07-23 00:21:12.195473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:57.598 [2024-07-23 00:21:12.195482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:16:57.598 [2024-07-23 00:21:12.195494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.598 [2024-07-23 00:21:12.195564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.598 [2024-07-23 00:21:12.195577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:57.598 [2024-07-23 00:21:12.195586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:57.598 [2024-07-23 00:21:12.195598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.598 [2024-07-23 00:21:12.195679] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:57.598 [2024-07-23 00:21:12.195713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:57.598 [2024-07-23 00:21:12.195723] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:57.598 [2024-07-23 00:21:12.195735] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.598 [2024-07-23 00:21:12.195745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:57.598 [2024-07-23 00:21:12.195759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:57.598 [2024-07-23 00:21:12.195768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:57.598 [2024-07-23 00:21:12.195779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:57.598 [2024-07-23 00:21:12.195788] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:57.598 [2024-07-23 00:21:12.195800] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:57.598 [2024-07-23 00:21:12.195809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:57.598 [2024-07-23 00:21:12.195821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:57.598 [2024-07-23 00:21:12.195830] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:57.598 [2024-07-23 00:21:12.195841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:57.598 [2024-07-23 00:21:12.195850] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:57.598 [2024-07-23 00:21:12.195861] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.598 
[2024-07-23 00:21:12.195870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:57.598 [2024-07-23 00:21:12.195881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:57.598 [2024-07-23 00:21:12.195891] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.598 [2024-07-23 00:21:12.195903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:57.598 [2024-07-23 00:21:12.195913] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:57.598 [2024-07-23 00:21:12.195927] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:57.598 [2024-07-23 00:21:12.195935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:57.598 [2024-07-23 00:21:12.195947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:57.598 [2024-07-23 00:21:12.195955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:57.599 [2024-07-23 00:21:12.195968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:57.599 [2024-07-23 00:21:12.195977] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:57.599 [2024-07-23 00:21:12.195988] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:57.599 [2024-07-23 00:21:12.195997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:57.599 [2024-07-23 00:21:12.196008] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:57.599 [2024-07-23 00:21:12.196017] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:57.599 [2024-07-23 00:21:12.196028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:57.599 [2024-07-23 00:21:12.196037] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:57.599 [2024-07-23 00:21:12.196048] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:57.599 [2024-07-23 00:21:12.196057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:57.599 [2024-07-23 00:21:12.196068] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:57.599 [2024-07-23 00:21:12.196077] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:57.599 [2024-07-23 00:21:12.196091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:57.599 [2024-07-23 00:21:12.196100] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:57.599 [2024-07-23 00:21:12.196111] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.599 [2024-07-23 00:21:12.196120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:57.599 [2024-07-23 00:21:12.196131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:57.599 [2024-07-23 00:21:12.196140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.599 [2024-07-23 00:21:12.196151] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:57.599 [2024-07-23 00:21:12.196160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:57.599 [2024-07-23 00:21:12.196172] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:57.599 [2024-07-23 00:21:12.196188] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.599 [2024-07-23 00:21:12.196201] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:57.599 [2024-07-23 00:21:12.196211] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:57.599 [2024-07-23 00:21:12.196222] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:57.599 [2024-07-23 00:21:12.196231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:57.599 [2024-07-23 00:21:12.196244] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:57.599 [2024-07-23 00:21:12.196254] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:57.599 [2024-07-23 00:21:12.196268] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:57.599 [2024-07-23 00:21:12.196280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:57.599 [2024-07-23 00:21:12.196294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:57.599 [2024-07-23 00:21:12.196332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:57.599 [2024-07-23 00:21:12.196345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:57.599 [2024-07-23 00:21:12.196356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:57.599 [2024-07-23 00:21:12.196368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:57.599 [2024-07-23 00:21:12.196378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:57.599 [2024-07-23 00:21:12.196390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:57.599 [2024-07-23 00:21:12.196400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:57.599 [2024-07-23 00:21:12.196412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:57.599 [2024-07-23 00:21:12.196422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:57.599 [2024-07-23 00:21:12.196435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:57.599 [2024-07-23 00:21:12.196445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:57.599 [2024-07-23 00:21:12.196458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:57.599 [2024-07-23 00:21:12.196468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:57.599 [2024-07-23 00:21:12.196482] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:57.599 [2024-07-23 
00:21:12.196494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:57.599 [2024-07-23 00:21:12.196509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:57.599 [2024-07-23 00:21:12.196520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:57.599 [2024-07-23 00:21:12.196533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:57.599 [2024-07-23 00:21:12.196542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:57.599 [2024-07-23 00:21:12.196557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.599 [2024-07-23 00:21:12.196568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:57.599 [2024-07-23 00:21:12.196580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:16:57.599 [2024-07-23 00:21:12.196589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.599 [2024-07-23 00:21:12.208659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.599 [2024-07-23 00:21:12.208692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:57.599 [2024-07-23 00:21:12.208717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.014 ms 00:16:57.599 [2024-07-23 00:21:12.208728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.599 [2024-07-23 00:21:12.208845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.599 [2024-07-23 00:21:12.208860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:57.599 [2024-07-23 00:21:12.208876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:57.599 [2024-07-23 00:21:12.208886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.599 [2024-07-23 00:21:12.219794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.599 [2024-07-23 00:21:12.219827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:57.599 [2024-07-23 00:21:12.219851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.899 ms 00:16:57.599 [2024-07-23 00:21:12.219862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.599 [2024-07-23 00:21:12.219942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.599 [2024-07-23 00:21:12.219955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:57.599 [2024-07-23 00:21:12.219968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:57.599 [2024-07-23 00:21:12.219977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.599 [2024-07-23 00:21:12.220422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.599 [2024-07-23 00:21:12.220439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:57.599 [2024-07-23 00:21:12.220453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:16:57.599 [2024-07-23 00:21:12.220463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:57.599 [2024-07-23 00:21:12.220580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.599 [2024-07-23 00:21:12.220596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:57.599 [2024-07-23 00:21:12.220613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:16:57.599 [2024-07-23 00:21:12.220623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.599 [2024-07-23 00:21:12.227755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.599 [2024-07-23 00:21:12.227787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:57.599 [2024-07-23 00:21:12.227802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.117 ms 00:16:57.599 [2024-07-23 00:21:12.227812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.599 [2024-07-23 00:21:12.230480] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:57.599 [2024-07-23 00:21:12.230514] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:57.599 [2024-07-23 00:21:12.230533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.599 [2024-07-23 00:21:12.230543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:57.599 [2024-07-23 00:21:12.230556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.618 ms 00:16:57.599 [2024-07-23 00:21:12.230566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.599 [2024-07-23 00:21:12.243275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.599 [2024-07-23 00:21:12.243311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:57.599 [2024-07-23 00:21:12.243327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.654 ms 00:16:57.599 [2024-07-23 00:21:12.243337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.599 [2024-07-23 00:21:12.245156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.599 [2024-07-23 00:21:12.245190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:57.599 [2024-07-23 00:21:12.245205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.709 ms 00:16:57.599 [2024-07-23 00:21:12.245215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.599 [2024-07-23 00:21:12.246732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.599 [2024-07-23 00:21:12.246764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:57.599 [2024-07-23 00:21:12.246778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.472 ms 00:16:57.600 [2024-07-23 00:21:12.246788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.600 [2024-07-23 00:21:12.247075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.600 [2024-07-23 00:21:12.247093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:57.600 [2024-07-23 00:21:12.247107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:16:57.600 [2024-07-23 00:21:12.247125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.600 [2024-07-23 00:21:12.275072] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.600 [2024-07-23 00:21:12.275124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:57.600 [2024-07-23 00:21:12.275144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.961 ms 00:16:57.600 [2024-07-23 00:21:12.275154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.859 [2024-07-23 00:21:12.281462] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:57.859 [2024-07-23 00:21:12.297311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.859 [2024-07-23 00:21:12.297356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:57.859 [2024-07-23 00:21:12.297371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.093 ms 00:16:57.859 [2024-07-23 00:21:12.297384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.859 [2024-07-23 00:21:12.297488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.859 [2024-07-23 00:21:12.297511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:57.859 [2024-07-23 00:21:12.297522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:57.859 [2024-07-23 00:21:12.297539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.859 [2024-07-23 00:21:12.297601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.859 [2024-07-23 00:21:12.297616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:57.859 [2024-07-23 00:21:12.297627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:57.859 [2024-07-23 00:21:12.297649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.859 [2024-07-23 00:21:12.297675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.860 [2024-07-23 00:21:12.297689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:57.860 [2024-07-23 00:21:12.297699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:57.860 [2024-07-23 00:21:12.297714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.860 [2024-07-23 00:21:12.297750] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:57.860 [2024-07-23 00:21:12.297763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.860 [2024-07-23 00:21:12.297773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:57.860 [2024-07-23 00:21:12.297786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:57.860 [2024-07-23 00:21:12.297796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.860 [2024-07-23 00:21:12.301531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.860 [2024-07-23 00:21:12.301574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:57.860 [2024-07-23 00:21:12.301590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.711 ms 00:16:57.860 [2024-07-23 00:21:12.301600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.860 [2024-07-23 00:21:12.301693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.860 [2024-07-23 00:21:12.301706] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:57.860 [2024-07-23 00:21:12.301727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:57.860 [2024-07-23 00:21:12.301737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.860 [2024-07-23 00:21:12.302696] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:57.860 [2024-07-23 00:21:12.303639] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 123.403 ms, result 0 00:16:57.860 [2024-07-23 00:21:12.304683] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:57.860 Some configs were skipped because the RPC state that can call them passed over. 00:16:57.860 00:21:12 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:57.860 [2024-07-23 00:21:12.523383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.860 [2024-07-23 00:21:12.523559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:57.860 [2024-07-23 00:21:12.523637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.692 ms 00:16:57.860 [2024-07-23 00:21:12.523677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.860 [2024-07-23 00:21:12.523744] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.056 ms, result 0 00:16:57.860 true 00:16:57.860 00:21:12 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:58.119 [2024-07-23 00:21:12.706781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.119 [2024-07-23 00:21:12.706968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:58.119 [2024-07-23 00:21:12.707084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:16:58.119 [2024-07-23 00:21:12.707123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.119 [2024-07-23 00:21:12.707215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.652 ms, result 0 00:16:58.119 true 00:16:58.119 00:21:12 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 89191 00:16:58.119 00:21:12 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89191 ']' 00:16:58.119 00:21:12 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89191 00:16:58.119 00:21:12 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:16:58.119 00:21:12 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:58.120 00:21:12 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89191 00:16:58.120 00:21:12 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:58.120 00:21:12 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:58.120 00:21:12 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89191' 00:16:58.120 killing process with pid 89191 00:16:58.120 00:21:12 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 89191 00:16:58.120 00:21:12 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 89191 00:16:58.380 [2024-07-23 00:21:12.916724] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.381 [2024-07-23 00:21:12.916990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:58.381 [2024-07-23 00:21:12.917114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:58.381 [2024-07-23 00:21:12.917156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.381 [2024-07-23 00:21:12.917219] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:58.381 [2024-07-23 00:21:12.917991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.381 [2024-07-23 00:21:12.918096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:58.381 [2024-07-23 00:21:12.918175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:16:58.381 [2024-07-23 00:21:12.918213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.381 [2024-07-23 00:21:12.918512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.381 [2024-07-23 00:21:12.918531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:58.381 [2024-07-23 00:21:12.918545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:16:58.381 [2024-07-23 00:21:12.918555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.381 [2024-07-23 00:21:12.921882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.381 [2024-07-23 00:21:12.921916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:58.381 [2024-07-23 00:21:12.921931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.307 ms 00:16:58.381 [2024-07-23 00:21:12.921942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.381 [2024-07-23 00:21:12.927573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.381 [2024-07-23 00:21:12.927618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:58.381 [2024-07-23 00:21:12.927632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.600 ms 00:16:58.381 [2024-07-23 00:21:12.927641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.381 [2024-07-23 00:21:12.929158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.381 [2024-07-23 00:21:12.929195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:58.381 [2024-07-23 00:21:12.929209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:16:58.381 [2024-07-23 00:21:12.929218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.381 [2024-07-23 00:21:12.932952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.381 [2024-07-23 00:21:12.932989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:58.381 [2024-07-23 00:21:12.933004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.700 ms 00:16:58.381 [2024-07-23 00:21:12.933015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.381 [2024-07-23 00:21:12.933143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.381 [2024-07-23 00:21:12.933156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:58.381 [2024-07-23 00:21:12.933169] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:16:58.381 [2024-07-23 00:21:12.933179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.381 [2024-07-23 00:21:12.935223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.381 [2024-07-23 00:21:12.935257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:58.381 [2024-07-23 00:21:12.935286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.022 ms 00:16:58.381 [2024-07-23 00:21:12.935296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.381 [2024-07-23 00:21:12.936796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.381 [2024-07-23 00:21:12.936832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:58.381 [2024-07-23 00:21:12.936848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.458 ms 00:16:58.381 [2024-07-23 00:21:12.936857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.381 [2024-07-23 00:21:12.938151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.381 [2024-07-23 00:21:12.938187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:58.381 [2024-07-23 00:21:12.938201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:16:58.381 [2024-07-23 00:21:12.938211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.381 [2024-07-23 00:21:12.939483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.381 [2024-07-23 00:21:12.939515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:58.381 [2024-07-23 00:21:12.939528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.197 ms 00:16:58.381 [2024-07-23 00:21:12.939537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.381 [2024-07-23 00:21:12.939573] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:58.381 [2024-07-23 00:21:12.939589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939712] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.939997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.940008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 
00:21:12.940023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.940035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.940047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.940058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.940072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.940083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.940096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.940107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.940120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.940131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.940143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.940154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:58.381 [2024-07-23 00:21:12.940167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:16:58.382 [2024-07-23 00:21:12.940336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:58.382 [2024-07-23 00:21:12.940819] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:58.382 [2024-07-23 00:21:12.940831] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05fcf3be-df9c-41ef-ae3d-41db770ccbb0 00:16:58.382 [2024-07-23 00:21:12.940842] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:58.382 [2024-07-23 00:21:12.940854] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:58.382 [2024-07-23 00:21:12.940866] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:58.382 [2024-07-23 00:21:12.940886] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:58.382 [2024-07-23 00:21:12.940895] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:58.382 [2024-07-23 00:21:12.940908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:58.382 [2024-07-23 00:21:12.940917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:58.382 [2024-07-23 00:21:12.940928] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:58.382 [2024-07-23 00:21:12.940938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:58.382 [2024-07-23 00:21:12.940950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.382 
[2024-07-23 00:21:12.940959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:58.382 [2024-07-23 00:21:12.940972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.381 ms 00:16:58.382 [2024-07-23 00:21:12.940981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.382 [2024-07-23 00:21:12.942957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.382 [2024-07-23 00:21:12.943070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:58.382 [2024-07-23 00:21:12.943142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.938 ms 00:16:58.382 [2024-07-23 00:21:12.943177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.382 [2024-07-23 00:21:12.943323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.382 [2024-07-23 00:21:12.943360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:58.382 [2024-07-23 00:21:12.943513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:16:58.382 [2024-07-23 00:21:12.943616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.382 [2024-07-23 00:21:12.950663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.382 [2024-07-23 00:21:12.950784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:58.382 [2024-07-23 00:21:12.950858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.382 [2024-07-23 00:21:12.950893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.382 [2024-07-23 00:21:12.951001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.382 [2024-07-23 00:21:12.951036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:58.382 [2024-07-23 00:21:12.951068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.382 [2024-07-23 00:21:12.951105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.382 [2024-07-23 00:21:12.951181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.382 [2024-07-23 00:21:12.951332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:58.382 [2024-07-23 00:21:12.951442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.382 [2024-07-23 00:21:12.951471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.382 [2024-07-23 00:21:12.951518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.382 [2024-07-23 00:21:12.951549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:58.382 [2024-07-23 00:21:12.951589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.382 [2024-07-23 00:21:12.951618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.383 [2024-07-23 00:21:12.963636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.383 [2024-07-23 00:21:12.963822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:58.383 [2024-07-23 00:21:12.963908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.383 [2024-07-23 00:21:12.963944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.383 [2024-07-23 00:21:12.972259] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.383 [2024-07-23 00:21:12.972478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.383 [2024-07-23 00:21:12.972556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.383 [2024-07-23 00:21:12.972591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.383 [2024-07-23 00:21:12.972664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.383 [2024-07-23 00:21:12.972706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.383 [2024-07-23 00:21:12.972743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.383 [2024-07-23 00:21:12.972772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.383 [2024-07-23 00:21:12.972827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.383 [2024-07-23 00:21:12.972912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.383 [2024-07-23 00:21:12.972953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.383 [2024-07-23 00:21:12.972992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.383 [2024-07-23 00:21:12.973113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.383 [2024-07-23 00:21:12.973153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.383 [2024-07-23 00:21:12.973186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.383 [2024-07-23 00:21:12.973218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.383 [2024-07-23 00:21:12.973345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.383 [2024-07-23 00:21:12.973385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:58.383 [2024-07-23 00:21:12.973419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.383 [2024-07-23 00:21:12.973448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.383 [2024-07-23 00:21:12.973520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.383 [2024-07-23 00:21:12.973555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.383 [2024-07-23 00:21:12.973587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.383 [2024-07-23 00:21:12.973723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.383 [2024-07-23 00:21:12.973792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.383 [2024-07-23 00:21:12.973825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.383 [2024-07-23 00:21:12.973857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.383 [2024-07-23 00:21:12.973888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.383 [2024-07-23 00:21:12.974076] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.404 ms, result 0 00:16:58.643 00:21:13 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:58.643 
[2024-07-23 00:21:13.304627] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:58.643 [2024-07-23 00:21:13.305135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89227 ] 00:16:58.901 [2024-07-23 00:21:13.457224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.901 [2024-07-23 00:21:13.501324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.162 [2024-07-23 00:21:13.623212] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:59.162 [2024-07-23 00:21:13.623299] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:59.162 [2024-07-23 00:21:13.775182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.162 [2024-07-23 00:21:13.775236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:59.162 [2024-07-23 00:21:13.775251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:59.162 [2024-07-23 00:21:13.775296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.162 [2024-07-23 00:21:13.777671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.162 [2024-07-23 00:21:13.777714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:59.162 [2024-07-23 00:21:13.777727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.349 ms 00:16:59.162 [2024-07-23 00:21:13.777736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.162 [2024-07-23 00:21:13.777815] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:59.162 [2024-07-23 00:21:13.778038] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:59.162 [2024-07-23 00:21:13.778057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.162 [2024-07-23 00:21:13.778067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:59.162 [2024-07-23 00:21:13.778081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:16:59.162 [2024-07-23 00:21:13.778092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.162 [2024-07-23 00:21:13.779577] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:59.162 [2024-07-23 00:21:13.782157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.162 [2024-07-23 00:21:13.782190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:59.162 [2024-07-23 00:21:13.782202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:16:59.162 [2024-07-23 00:21:13.782212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.162 [2024-07-23 00:21:13.782288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.162 [2024-07-23 00:21:13.782303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:59.162 [2024-07-23 00:21:13.782313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:59.162 [2024-07-23 00:21:13.782334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.162 [2024-07-23 
00:21:13.789041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.162 [2024-07-23 00:21:13.789075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:59.162 [2024-07-23 00:21:13.789088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.674 ms 00:16:59.162 [2024-07-23 00:21:13.789098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.162 [2024-07-23 00:21:13.789220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.162 [2024-07-23 00:21:13.789234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:59.162 [2024-07-23 00:21:13.789246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:59.162 [2024-07-23 00:21:13.789273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.162 [2024-07-23 00:21:13.789306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.162 [2024-07-23 00:21:13.789322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:59.162 [2024-07-23 00:21:13.789332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:59.162 [2024-07-23 00:21:13.789342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.162 [2024-07-23 00:21:13.789364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:59.162 [2024-07-23 00:21:13.790976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.162 [2024-07-23 00:21:13.791005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:59.162 [2024-07-23 00:21:13.791021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.620 ms 00:16:59.162 [2024-07-23 00:21:13.791038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.162 [2024-07-23 00:21:13.791087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.162 [2024-07-23 00:21:13.791110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:59.162 [2024-07-23 00:21:13.791121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:59.162 [2024-07-23 00:21:13.791131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.162 [2024-07-23 00:21:13.791158] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:59.162 [2024-07-23 00:21:13.791186] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:59.162 [2024-07-23 00:21:13.791230] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:59.162 [2024-07-23 00:21:13.791251] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:16:59.162 [2024-07-23 00:21:13.791344] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:59.162 [2024-07-23 00:21:13.791357] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:59.162 [2024-07-23 00:21:13.791371] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:59.162 [2024-07-23 00:21:13.791383] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device 
capacity: 103424.00 MiB 00:16:59.162 [2024-07-23 00:21:13.791395] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:59.162 [2024-07-23 00:21:13.791406] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:59.162 [2024-07-23 00:21:13.791416] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:59.162 [2024-07-23 00:21:13.791425] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:59.162 [2024-07-23 00:21:13.791439] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:59.162 [2024-07-23 00:21:13.791449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.162 [2024-07-23 00:21:13.791459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:59.162 [2024-07-23 00:21:13.791483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:16:59.162 [2024-07-23 00:21:13.791500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.162 [2024-07-23 00:21:13.791578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.162 [2024-07-23 00:21:13.791589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:59.162 [2024-07-23 00:21:13.791607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:59.162 [2024-07-23 00:21:13.791617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.162 [2024-07-23 00:21:13.791709] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:59.162 [2024-07-23 00:21:13.791722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:59.162 [2024-07-23 00:21:13.791732] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.162 [2024-07-23 00:21:13.791742] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.162 [2024-07-23 00:21:13.791752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:59.162 [2024-07-23 00:21:13.791761] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:59.162 [2024-07-23 00:21:13.791770] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:59.162 [2024-07-23 00:21:13.791779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:59.162 [2024-07-23 00:21:13.791789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:59.162 [2024-07-23 00:21:13.791798] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.162 [2024-07-23 00:21:13.791807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:59.162 [2024-07-23 00:21:13.791816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:59.162 [2024-07-23 00:21:13.791827] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.162 [2024-07-23 00:21:13.791837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:59.162 [2024-07-23 00:21:13.791846] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:59.162 [2024-07-23 00:21:13.791855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.162 [2024-07-23 00:21:13.791865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:59.162 [2024-07-23 00:21:13.791874] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 
124.00 MiB 00:16:59.162 [2024-07-23 00:21:13.791883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.162 [2024-07-23 00:21:13.791892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:59.162 [2024-07-23 00:21:13.791901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:59.162 [2024-07-23 00:21:13.791910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.162 [2024-07-23 00:21:13.791919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:59.162 [2024-07-23 00:21:13.791928] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:59.162 [2024-07-23 00:21:13.791936] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.162 [2024-07-23 00:21:13.791945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:59.162 [2024-07-23 00:21:13.791955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:59.162 [2024-07-23 00:21:13.791963] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.162 [2024-07-23 00:21:13.791977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:59.162 [2024-07-23 00:21:13.791987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:59.162 [2024-07-23 00:21:13.791995] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.162 [2024-07-23 00:21:13.792004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:59.162 [2024-07-23 00:21:13.792013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:59.162 [2024-07-23 00:21:13.792022] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:59.162 [2024-07-23 00:21:13.792031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:59.162 [2024-07-23 00:21:13.792040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:59.163 [2024-07-23 00:21:13.792048] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:59.163 [2024-07-23 00:21:13.792057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:59.163 [2024-07-23 00:21:13.792066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:59.163 [2024-07-23 00:21:13.792075] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.163 [2024-07-23 00:21:13.792083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:59.163 [2024-07-23 00:21:13.792092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:59.163 [2024-07-23 00:21:13.792102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.163 [2024-07-23 00:21:13.792110] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:59.163 [2024-07-23 00:21:13.792122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:59.163 [2024-07-23 00:21:13.792133] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.163 [2024-07-23 00:21:13.792142] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.163 [2024-07-23 00:21:13.792159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:59.163 [2024-07-23 00:21:13.792168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:59.163 [2024-07-23 00:21:13.792178] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:59.163 [2024-07-23 00:21:13.792187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:59.163 [2024-07-23 00:21:13.792196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:59.163 [2024-07-23 00:21:13.792205] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:59.163 [2024-07-23 00:21:13.792216] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:59.163 [2024-07-23 00:21:13.792228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.163 [2024-07-23 00:21:13.792239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:59.163 [2024-07-23 00:21:13.792249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:59.163 [2024-07-23 00:21:13.792270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:59.163 [2024-07-23 00:21:13.792281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:59.163 [2024-07-23 00:21:13.792292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:59.163 [2024-07-23 00:21:13.792304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:59.163 [2024-07-23 00:21:13.792314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:59.163 [2024-07-23 00:21:13.792325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:59.163 [2024-07-23 00:21:13.792335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:59.163 [2024-07-23 00:21:13.792346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:59.163 [2024-07-23 00:21:13.792356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:59.163 [2024-07-23 00:21:13.792366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:59.163 [2024-07-23 00:21:13.792376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:59.163 [2024-07-23 00:21:13.792387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:59.163 [2024-07-23 00:21:13.792396] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:59.163 [2024-07-23 00:21:13.792410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.163 [2024-07-23 00:21:13.792430] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:59.163 [2024-07-23 00:21:13.792440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:59.163 [2024-07-23 00:21:13.792451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:59.163 [2024-07-23 00:21:13.792461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:59.163 [2024-07-23 00:21:13.792472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.163 [2024-07-23 00:21:13.792492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:59.163 [2024-07-23 00:21:13.792504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:16:59.163 [2024-07-23 00:21:13.792520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.163 [2024-07-23 00:21:13.815792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.163 [2024-07-23 00:21:13.815840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:59.163 [2024-07-23 00:21:13.815858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.253 ms 00:16:59.163 [2024-07-23 00:21:13.815883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.163 [2024-07-23 00:21:13.816047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.163 [2024-07-23 00:21:13.816064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:59.163 [2024-07-23 00:21:13.816078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:59.163 [2024-07-23 00:21:13.816091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.163 [2024-07-23 00:21:13.827230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.163 [2024-07-23 00:21:13.827287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:59.163 [2024-07-23 00:21:13.827304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.114 ms 00:16:59.163 [2024-07-23 00:21:13.827322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.163 [2024-07-23 00:21:13.827410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.163 [2024-07-23 00:21:13.827425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:59.163 [2024-07-23 00:21:13.827439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:59.163 [2024-07-23 00:21:13.827451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.163 [2024-07-23 00:21:13.827911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.163 [2024-07-23 00:21:13.827939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:59.163 [2024-07-23 00:21:13.827953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:16:59.163 [2024-07-23 00:21:13.827975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.163 [2024-07-23 00:21:13.828117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.163 [2024-07-23 00:21:13.828132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:16:59.163 [2024-07-23 00:21:13.828145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:16:59.163 [2024-07-23 00:21:13.828157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.163 [2024-07-23 00:21:13.834762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.163 [2024-07-23 00:21:13.834798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:59.163 [2024-07-23 00:21:13.834811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.579 ms 00:16:59.163 [2024-07-23 00:21:13.834821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.163 [2024-07-23 00:21:13.837527] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:59.163 [2024-07-23 00:21:13.837562] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:59.163 [2024-07-23 00:21:13.837577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.163 [2024-07-23 00:21:13.837592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:59.163 [2024-07-23 00:21:13.837603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.654 ms 00:16:59.163 [2024-07-23 00:21:13.837613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.423 [2024-07-23 00:21:13.850458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.423 [2024-07-23 00:21:13.850494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:59.423 [2024-07-23 00:21:13.850508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.797 ms 00:16:59.423 [2024-07-23 00:21:13.850524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.423 [2024-07-23 00:21:13.852426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.423 [2024-07-23 00:21:13.852453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:59.423 [2024-07-23 00:21:13.852465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.818 ms 00:16:59.423 [2024-07-23 00:21:13.852475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.423 [2024-07-23 00:21:13.853924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.423 [2024-07-23 00:21:13.853950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:59.423 [2024-07-23 00:21:13.853961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.406 ms 00:16:59.423 [2024-07-23 00:21:13.853971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.423 [2024-07-23 00:21:13.854290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.423 [2024-07-23 00:21:13.854313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:59.423 [2024-07-23 00:21:13.854325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:16:59.423 [2024-07-23 00:21:13.854335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.423 [2024-07-23 00:21:13.875290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.423 [2024-07-23 00:21:13.875356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:59.423 
[2024-07-23 00:21:13.875373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.959 ms 00:16:59.423 [2024-07-23 00:21:13.875383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.423 [2024-07-23 00:21:13.881717] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:59.423 [2024-07-23 00:21:13.898250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.423 [2024-07-23 00:21:13.898316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:59.423 [2024-07-23 00:21:13.898331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.801 ms 00:16:59.423 [2024-07-23 00:21:13.898341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.423 [2024-07-23 00:21:13.898447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.423 [2024-07-23 00:21:13.898461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:59.423 [2024-07-23 00:21:13.898476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:59.423 [2024-07-23 00:21:13.898487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.423 [2024-07-23 00:21:13.898544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.423 [2024-07-23 00:21:13.898555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:59.423 [2024-07-23 00:21:13.898566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:59.423 [2024-07-23 00:21:13.898576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.423 [2024-07-23 00:21:13.898598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.423 [2024-07-23 00:21:13.898609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:59.423 [2024-07-23 00:21:13.898631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:59.423 [2024-07-23 00:21:13.898644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.423 [2024-07-23 00:21:13.898677] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:59.423 [2024-07-23 00:21:13.898689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.423 [2024-07-23 00:21:13.898700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:59.423 [2024-07-23 00:21:13.898710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:59.423 [2024-07-23 00:21:13.898720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.423 [2024-07-23 00:21:13.902376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.423 [2024-07-23 00:21:13.902411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:59.423 [2024-07-23 00:21:13.902424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.640 ms 00:16:59.423 [2024-07-23 00:21:13.902440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.423 [2024-07-23 00:21:13.902529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.423 [2024-07-23 00:21:13.902542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:59.423 [2024-07-23 00:21:13.902553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:59.423 [2024-07-23 
00:21:13.902563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.423 [2024-07-23 00:21:13.903530] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:59.423 [2024-07-23 00:21:13.904484] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.265 ms, result 0 00:16:59.423 [2024-07-23 00:21:13.905179] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:59.423 [2024-07-23 00:21:13.914960] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:09.552  Copying: 28/256 [MB] (28 MBps) Copying: 52/256 [MB] (24 MBps) Copying: 78/256 [MB] (25 MBps) Copying: 104/256 [MB] (25 MBps) Copying: 129/256 [MB] (25 MBps) Copying: 155/256 [MB] (25 MBps) Copying: 181/256 [MB] (25 MBps) Copying: 206/256 [MB] (24 MBps) Copying: 231/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-23 00:21:24.195643] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:09.552 [2024-07-23 00:21:24.198875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.552 [2024-07-23 00:21:24.198975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:09.552 [2024-07-23 00:21:24.199020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:09.552 [2024-07-23 00:21:24.199053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.552 [2024-07-23 00:21:24.199123] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:09.552 [2024-07-23 00:21:24.200001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.552 [2024-07-23 00:21:24.200045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:09.552 [2024-07-23 00:21:24.200067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:17:09.552 [2024-07-23 00:21:24.200088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.552 [2024-07-23 00:21:24.200900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.552 [2024-07-23 00:21:24.200944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:09.552 [2024-07-23 00:21:24.200976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:17:09.552 [2024-07-23 00:21:24.200997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.552 [2024-07-23 00:21:24.207894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.552 [2024-07-23 00:21:24.207968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:09.552 [2024-07-23 00:21:24.207992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.871 ms 00:17:09.552 [2024-07-23 00:21:24.208012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.552 [2024-07-23 00:21:24.217113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.552 [2024-07-23 00:21:24.217194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:09.552 [2024-07-23 00:21:24.217216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.034 ms 00:17:09.552 [2024-07-23 00:21:24.217238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:09.552 [2024-07-23 00:21:24.218772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.552 [2024-07-23 00:21:24.218820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:09.552 [2024-07-23 00:21:24.218838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.463 ms 00:17:09.552 [2024-07-23 00:21:24.218851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.552 [2024-07-23 00:21:24.222670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.552 [2024-07-23 00:21:24.222709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:09.552 [2024-07-23 00:21:24.222722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.783 ms 00:17:09.552 [2024-07-23 00:21:24.222732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.552 [2024-07-23 00:21:24.222847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.553 [2024-07-23 00:21:24.222860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:09.553 [2024-07-23 00:21:24.222883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:09.553 [2024-07-23 00:21:24.222893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.553 [2024-07-23 00:21:24.224701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.553 [2024-07-23 00:21:24.224736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:09.553 [2024-07-23 00:21:24.224748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.791 ms 00:17:09.553 [2024-07-23 00:21:24.224758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.553 [2024-07-23 00:21:24.226306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.553 [2024-07-23 00:21:24.226341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:09.553 [2024-07-23 00:21:24.226353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.519 ms 00:17:09.553 [2024-07-23 00:21:24.226363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.553 [2024-07-23 00:21:24.227603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.553 [2024-07-23 00:21:24.227638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:09.553 [2024-07-23 00:21:24.227651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:17:09.553 [2024-07-23 00:21:24.227660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.553 [2024-07-23 00:21:24.228852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.553 [2024-07-23 00:21:24.228887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:09.553 [2024-07-23 00:21:24.228898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.137 ms 00:17:09.553 [2024-07-23 00:21:24.228907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.553 [2024-07-23 00:21:24.228937] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:09.553 [2024-07-23 00:21:24.228954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.228978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.228989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229610] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:09.553 [2024-07-23 00:21:24.229818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 
00:21:24.229870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.229994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.230005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.230015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.230026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.230036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.230047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.230058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.230068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.230079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.230090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.230101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.230111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:09.554 [2024-07-23 00:21:24.230129] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:09.554 [2024-07-23 00:21:24.230139] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05fcf3be-df9c-41ef-ae3d-41db770ccbb0 00:17:09.554 [2024-07-23 00:21:24.230149] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:09.554 [2024-07-23 00:21:24.230159] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:09.554 [2024-07-23 00:21:24.230169] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:09.554 [2024-07-23 00:21:24.230179] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:09.554 [2024-07-23 00:21:24.230195] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:09.554 [2024-07-23 00:21:24.230210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:09.554 [2024-07-23 00:21:24.230220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:09.554 [2024-07-23 00:21:24.230229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:09.554 [2024-07-23 00:21:24.230238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:09.554 [2024-07-23 00:21:24.230248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.554 [2024-07-23 00:21:24.230272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:09.554 [2024-07-23 00:21:24.230297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.314 ms 00:17:09.554 [2024-07-23 00:21:24.230311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.554 [2024-07-23 00:21:24.232018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.554 [2024-07-23 00:21:24.232039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:09.554 [2024-07-23 00:21:24.232050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.687 ms 00:17:09.554 [2024-07-23 00:21:24.232072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.554 [2024-07-23 00:21:24.232189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.554 [2024-07-23 00:21:24.232201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:09.554 [2024-07-23 00:21:24.232212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:09.554 [2024-07-23 00:21:24.232222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.813 [2024-07-23 00:21:24.238570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.813 [2024-07-23 00:21:24.238596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:09.813 [2024-07-23 00:21:24.238621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.813 [2024-07-23 00:21:24.238631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.813 [2024-07-23 00:21:24.238697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.813 [2024-07-23 00:21:24.238709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:09.813 [2024-07-23 00:21:24.238719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.813 [2024-07-23 00:21:24.238730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.813 [2024-07-23 00:21:24.238778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.814 [2024-07-23 00:21:24.238791] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:09.814 [2024-07-23 00:21:24.238801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.814 [2024-07-23 00:21:24.238818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.814 [2024-07-23 00:21:24.238841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.814 [2024-07-23 00:21:24.238852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:09.814 [2024-07-23 00:21:24.238868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.814 [2024-07-23 00:21:24.238879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.814 [2024-07-23 00:21:24.250798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.814 [2024-07-23 00:21:24.250841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:09.814 [2024-07-23 00:21:24.250854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.814 [2024-07-23 00:21:24.250893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.814 [2024-07-23 00:21:24.259164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.814 [2024-07-23 00:21:24.259201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:09.814 [2024-07-23 00:21:24.259215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.814 [2024-07-23 00:21:24.259225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.814 [2024-07-23 00:21:24.259255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.814 [2024-07-23 00:21:24.259279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:09.814 [2024-07-23 00:21:24.259289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.814 [2024-07-23 00:21:24.259315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.814 [2024-07-23 00:21:24.259351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.814 [2024-07-23 00:21:24.259362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:09.814 [2024-07-23 00:21:24.259373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.814 [2024-07-23 00:21:24.259391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.814 [2024-07-23 00:21:24.259468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.814 [2024-07-23 00:21:24.259480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:09.814 [2024-07-23 00:21:24.259491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.814 [2024-07-23 00:21:24.259501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.814 [2024-07-23 00:21:24.259540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.814 [2024-07-23 00:21:24.259556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:09.814 [2024-07-23 00:21:24.259567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.814 [2024-07-23 00:21:24.259583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.814 [2024-07-23 00:21:24.259624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:17:09.814 [2024-07-23 00:21:24.259642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:09.814 [2024-07-23 00:21:24.259653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.814 [2024-07-23 00:21:24.259663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.814 [2024-07-23 00:21:24.259710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.814 [2024-07-23 00:21:24.259722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:09.814 [2024-07-23 00:21:24.259732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.814 [2024-07-23 00:21:24.259752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.814 [2024-07-23 00:21:24.259892] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.119 ms, result 0 00:17:09.814 00:17:09.814 00:17:10.073 00:21:24 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:10.332 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:10.332 00:21:24 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:10.332 00:21:24 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:10.332 00:21:24 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:10.332 00:21:24 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:10.332 00:21:24 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:10.592 00:21:25 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:10.592 Process with pid 89191 is not found 00:17:10.592 00:21:25 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 89191 00:17:10.592 00:21:25 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89191 ']' 00:17:10.592 00:21:25 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89191 00:17:10.592 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (89191) - No such process 00:17:10.592 00:21:25 ftl.ftl_trim -- common/autotest_common.sh@973 -- # echo 'Process with pid 89191 is not found' 00:17:10.592 ************************************ 00:17:10.592 END TEST ftl_trim 00:17:10.592 ************************************ 00:17:10.592 00:17:10.592 real 0m52.315s 00:17:10.592 user 1m12.500s 00:17:10.592 sys 0m5.844s 00:17:10.592 00:21:25 ftl.ftl_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:10.592 00:21:25 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:10.592 00:21:25 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:10.592 00:21:25 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:10.592 00:21:25 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:10.592 00:21:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:10.592 ************************************ 00:17:10.592 START TEST ftl_restore 00:17:10.592 ************************************ 00:17:10.592 00:21:25 ftl.ftl_restore -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:10.592 * Looking for test storage... 
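The killprocess failure traced above is the harness's liveness probe: kill -0 delivers no signal and only reports whether the pid exists, and since the ftl_trim target already exited, the probe at autotest_common.sh line 950 fails and the helper just prints the not-found message. A simplified sketch of that guard (the real helper in autotest_common.sh does additional cleanup and waiting; treat this as an illustrative reduction):

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1          # the '[' -z ... ']' check at @946
    if kill -0 "$pid" 2>/dev/null; then
      kill "$pid"                      # live target: terminate it
    else
      echo "Process with pid $pid is not found"
    fi
  }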
00:17:10.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.S0sF3Nz9Rf 00:17:10.852 00:21:25 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=89417 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:10.852 00:21:25 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 89417 00:17:10.852 00:21:25 ftl.ftl_restore -- common/autotest_common.sh@827 -- # '[' -z 89417 ']' 00:17:10.852 00:21:25 ftl.ftl_restore -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.852 00:21:25 ftl.ftl_restore -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:10.852 00:21:25 ftl.ftl_restore -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.852 00:21:25 ftl.ftl_restore -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:10.852 00:21:25 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:10.852 [2024-07-23 00:21:25.419387] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:10.852 [2024-07-23 00:21:25.419728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89417 ] 00:17:11.112 [2024-07-23 00:21:25.571710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.112 [2024-07-23 00:21:25.619062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.681 00:21:26 ftl.ftl_restore -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:11.681 00:21:26 ftl.ftl_restore -- common/autotest_common.sh@860 -- # return 0 00:17:11.681 00:21:26 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:11.681 00:21:26 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:11.681 00:21:26 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:11.681 00:21:26 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:11.681 00:21:26 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:11.681 00:21:26 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:11.940 00:21:26 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:11.940 00:21:26 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:11.940 00:21:26 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:11.940 00:21:26 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:17:11.940 00:21:26 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:11.940 00:21:26 ftl.ftl_restore -- 
common/autotest_common.sh@1376 -- # local bs 00:17:11.940 00:21:26 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:11.940 00:21:26 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:12.200 00:21:26 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:12.200 { 00:17:12.200 "name": "nvme0n1", 00:17:12.200 "aliases": [ 00:17:12.200 "8d8e08f4-eab6-4aaf-8840-c63915dfb8e4" 00:17:12.200 ], 00:17:12.200 "product_name": "NVMe disk", 00:17:12.200 "block_size": 4096, 00:17:12.200 "num_blocks": 1310720, 00:17:12.200 "uuid": "8d8e08f4-eab6-4aaf-8840-c63915dfb8e4", 00:17:12.200 "assigned_rate_limits": { 00:17:12.200 "rw_ios_per_sec": 0, 00:17:12.200 "rw_mbytes_per_sec": 0, 00:17:12.200 "r_mbytes_per_sec": 0, 00:17:12.200 "w_mbytes_per_sec": 0 00:17:12.200 }, 00:17:12.200 "claimed": true, 00:17:12.200 "claim_type": "read_many_write_one", 00:17:12.200 "zoned": false, 00:17:12.200 "supported_io_types": { 00:17:12.200 "read": true, 00:17:12.200 "write": true, 00:17:12.200 "unmap": true, 00:17:12.200 "write_zeroes": true, 00:17:12.200 "flush": true, 00:17:12.200 "reset": true, 00:17:12.200 "compare": true, 00:17:12.200 "compare_and_write": false, 00:17:12.200 "abort": true, 00:17:12.200 "nvme_admin": true, 00:17:12.200 "nvme_io": true 00:17:12.200 }, 00:17:12.200 "driver_specific": { 00:17:12.200 "nvme": [ 00:17:12.200 { 00:17:12.200 "pci_address": "0000:00:11.0", 00:17:12.200 "trid": { 00:17:12.200 "trtype": "PCIe", 00:17:12.200 "traddr": "0000:00:11.0" 00:17:12.200 }, 00:17:12.200 "ctrlr_data": { 00:17:12.200 "cntlid": 0, 00:17:12.200 "vendor_id": "0x1b36", 00:17:12.200 "model_number": "QEMU NVMe Ctrl", 00:17:12.200 "serial_number": "12341", 00:17:12.200 "firmware_revision": "8.0.0", 00:17:12.200 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:12.200 "oacs": { 00:17:12.200 "security": 0, 00:17:12.200 "format": 1, 00:17:12.200 "firmware": 0, 00:17:12.200 "ns_manage": 1 00:17:12.200 }, 00:17:12.200 "multi_ctrlr": false, 00:17:12.200 "ana_reporting": false 00:17:12.200 }, 00:17:12.200 "vs": { 00:17:12.200 "nvme_version": "1.4" 00:17:12.200 }, 00:17:12.200 "ns_data": { 00:17:12.200 "id": 1, 00:17:12.200 "can_share": false 00:17:12.200 } 00:17:12.200 } 00:17:12.200 ], 00:17:12.200 "mp_policy": "active_passive" 00:17:12.200 } 00:17:12.200 } 00:17:12.200 ]' 00:17:12.200 00:21:26 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:12.200 00:21:26 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:12.200 00:21:26 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:12.200 00:21:26 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=1310720 00:17:12.200 00:21:26 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:17:12.200 00:21:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 5120 00:17:12.200 00:21:26 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:12.200 00:21:26 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:12.200 00:21:26 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:12.200 00:21:26 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:12.200 00:21:26 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:12.459 00:21:26 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=3cedc874-5b09-4356-92c4-040f38f29b9b 00:17:12.459 00:21:26 ftl.ftl_restore -- 
ftl/common.sh@29 -- # for lvs in $stores 00:17:12.460 00:21:26 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3cedc874-5b09-4356-92c4-040f38f29b9b 00:17:12.719 00:21:27 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:12.719 00:21:27 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=12bf6442-ea37-4f79-8a0b-acfa0cb650de 00:17:12.719 00:21:27 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 12bf6442-ea37-4f79-8a0b-acfa0cb650de 00:17:12.978 00:21:27 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=6d55ca19-7bc3-4929-8cd5-7b6e2c1520db 00:17:12.978 00:21:27 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:12.978 00:21:27 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6d55ca19-7bc3-4929-8cd5-7b6e2c1520db 00:17:12.978 00:21:27 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:12.978 00:21:27 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:12.978 00:21:27 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=6d55ca19-7bc3-4929-8cd5-7b6e2c1520db 00:17:12.978 00:21:27 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:12.978 00:21:27 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 6d55ca19-7bc3-4929-8cd5-7b6e2c1520db 00:17:12.978 00:21:27 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=6d55ca19-7bc3-4929-8cd5-7b6e2c1520db 00:17:12.978 00:21:27 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:12.978 00:21:27 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:17:12.978 00:21:27 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:12.978 00:21:27 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d55ca19-7bc3-4929-8cd5-7b6e2c1520db 00:17:13.238 00:21:27 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:13.238 { 00:17:13.238 "name": "6d55ca19-7bc3-4929-8cd5-7b6e2c1520db", 00:17:13.238 "aliases": [ 00:17:13.238 "lvs/nvme0n1p0" 00:17:13.238 ], 00:17:13.238 "product_name": "Logical Volume", 00:17:13.238 "block_size": 4096, 00:17:13.238 "num_blocks": 26476544, 00:17:13.238 "uuid": "6d55ca19-7bc3-4929-8cd5-7b6e2c1520db", 00:17:13.238 "assigned_rate_limits": { 00:17:13.238 "rw_ios_per_sec": 0, 00:17:13.238 "rw_mbytes_per_sec": 0, 00:17:13.238 "r_mbytes_per_sec": 0, 00:17:13.238 "w_mbytes_per_sec": 0 00:17:13.238 }, 00:17:13.238 "claimed": false, 00:17:13.238 "zoned": false, 00:17:13.238 "supported_io_types": { 00:17:13.238 "read": true, 00:17:13.238 "write": true, 00:17:13.238 "unmap": true, 00:17:13.238 "write_zeroes": true, 00:17:13.238 "flush": false, 00:17:13.238 "reset": true, 00:17:13.238 "compare": false, 00:17:13.238 "compare_and_write": false, 00:17:13.238 "abort": false, 00:17:13.238 "nvme_admin": false, 00:17:13.238 "nvme_io": false 00:17:13.238 }, 00:17:13.238 "driver_specific": { 00:17:13.238 "lvol": { 00:17:13.238 "lvol_store_uuid": "12bf6442-ea37-4f79-8a0b-acfa0cb650de", 00:17:13.238 "base_bdev": "nvme0n1", 00:17:13.238 "thin_provision": true, 00:17:13.238 "num_allocated_clusters": 0, 00:17:13.238 "snapshot": false, 00:17:13.238 "clone": false, 00:17:13.238 "esnap_clone": false 00:17:13.238 } 00:17:13.238 } 00:17:13.238 } 00:17:13.238 ]' 00:17:13.238 00:21:27 ftl.ftl_restore -- 
common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:13.238 00:21:27 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:13.238 00:21:27 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:13.238 00:21:27 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:13.238 00:21:27 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:13.238 00:21:27 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:17:13.238 00:21:27 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:13.238 00:21:27 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:13.238 00:21:27 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:13.497 00:21:28 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:13.497 00:21:28 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:13.497 00:21:28 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 6d55ca19-7bc3-4929-8cd5-7b6e2c1520db 00:17:13.497 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=6d55ca19-7bc3-4929-8cd5-7b6e2c1520db 00:17:13.497 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:13.497 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:17:13.497 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:13.497 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d55ca19-7bc3-4929-8cd5-7b6e2c1520db 00:17:13.756 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:13.756 { 00:17:13.756 "name": "6d55ca19-7bc3-4929-8cd5-7b6e2c1520db", 00:17:13.756 "aliases": [ 00:17:13.756 "lvs/nvme0n1p0" 00:17:13.756 ], 00:17:13.756 "product_name": "Logical Volume", 00:17:13.756 "block_size": 4096, 00:17:13.756 "num_blocks": 26476544, 00:17:13.756 "uuid": "6d55ca19-7bc3-4929-8cd5-7b6e2c1520db", 00:17:13.756 "assigned_rate_limits": { 00:17:13.756 "rw_ios_per_sec": 0, 00:17:13.756 "rw_mbytes_per_sec": 0, 00:17:13.756 "r_mbytes_per_sec": 0, 00:17:13.756 "w_mbytes_per_sec": 0 00:17:13.756 }, 00:17:13.756 "claimed": false, 00:17:13.756 "zoned": false, 00:17:13.756 "supported_io_types": { 00:17:13.756 "read": true, 00:17:13.756 "write": true, 00:17:13.756 "unmap": true, 00:17:13.756 "write_zeroes": true, 00:17:13.756 "flush": false, 00:17:13.756 "reset": true, 00:17:13.756 "compare": false, 00:17:13.756 "compare_and_write": false, 00:17:13.756 "abort": false, 00:17:13.756 "nvme_admin": false, 00:17:13.756 "nvme_io": false 00:17:13.756 }, 00:17:13.756 "driver_specific": { 00:17:13.756 "lvol": { 00:17:13.756 "lvol_store_uuid": "12bf6442-ea37-4f79-8a0b-acfa0cb650de", 00:17:13.756 "base_bdev": "nvme0n1", 00:17:13.756 "thin_provision": true, 00:17:13.756 "num_allocated_clusters": 0, 00:17:13.756 "snapshot": false, 00:17:13.756 "clone": false, 00:17:13.756 "esnap_clone": false 00:17:13.756 } 00:17:13.756 } 00:17:13.756 } 00:17:13.756 ]' 00:17:13.756 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:13.756 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:13.756 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:13.756 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:13.756 00:21:28 ftl.ftl_restore 
-- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:13.756 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:17:13.756 00:21:28 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:13.756 00:21:28 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:14.015 00:21:28 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:14.015 00:21:28 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 6d55ca19-7bc3-4929-8cd5-7b6e2c1520db 00:17:14.015 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=6d55ca19-7bc3-4929-8cd5-7b6e2c1520db 00:17:14.015 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:14.015 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:17:14.015 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:14.015 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d55ca19-7bc3-4929-8cd5-7b6e2c1520db 00:17:14.274 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:14.274 { 00:17:14.274 "name": "6d55ca19-7bc3-4929-8cd5-7b6e2c1520db", 00:17:14.274 "aliases": [ 00:17:14.274 "lvs/nvme0n1p0" 00:17:14.274 ], 00:17:14.274 "product_name": "Logical Volume", 00:17:14.274 "block_size": 4096, 00:17:14.274 "num_blocks": 26476544, 00:17:14.274 "uuid": "6d55ca19-7bc3-4929-8cd5-7b6e2c1520db", 00:17:14.274 "assigned_rate_limits": { 00:17:14.274 "rw_ios_per_sec": 0, 00:17:14.274 "rw_mbytes_per_sec": 0, 00:17:14.274 "r_mbytes_per_sec": 0, 00:17:14.274 "w_mbytes_per_sec": 0 00:17:14.274 }, 00:17:14.274 "claimed": false, 00:17:14.274 "zoned": false, 00:17:14.274 "supported_io_types": { 00:17:14.274 "read": true, 00:17:14.274 "write": true, 00:17:14.274 "unmap": true, 00:17:14.274 "write_zeroes": true, 00:17:14.274 "flush": false, 00:17:14.274 "reset": true, 00:17:14.274 "compare": false, 00:17:14.274 "compare_and_write": false, 00:17:14.274 "abort": false, 00:17:14.274 "nvme_admin": false, 00:17:14.274 "nvme_io": false 00:17:14.274 }, 00:17:14.274 "driver_specific": { 00:17:14.274 "lvol": { 00:17:14.274 "lvol_store_uuid": "12bf6442-ea37-4f79-8a0b-acfa0cb650de", 00:17:14.274 "base_bdev": "nvme0n1", 00:17:14.274 "thin_provision": true, 00:17:14.274 "num_allocated_clusters": 0, 00:17:14.274 "snapshot": false, 00:17:14.274 "clone": false, 00:17:14.274 "esnap_clone": false 00:17:14.274 } 00:17:14.274 } 00:17:14.274 } 00:17:14.274 ]' 00:17:14.274 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:14.274 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:14.274 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:14.274 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:14.274 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:14.274 00:21:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:17:14.274 00:21:28 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:14.274 00:21:28 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6d55ca19-7bc3-4929-8cd5-7b6e2c1520db --l2p_dram_limit 10' 00:17:14.274 00:21:28 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:14.274 00:21:28 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 
0000:00:10.0 ']' 00:17:14.274 00:21:28 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:14.274 00:21:28 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:14.274 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:14.274 00:21:28 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6d55ca19-7bc3-4929-8cd5-7b6e2c1520db --l2p_dram_limit 10 -c nvc0n1p0 00:17:14.535 [2024-07-23 00:21:28.977431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.535 [2024-07-23 00:21:28.977481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:14.535 [2024-07-23 00:21:28.977501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:14.535 [2024-07-23 00:21:28.977512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.535 [2024-07-23 00:21:28.977580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.535 [2024-07-23 00:21:28.977593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:14.535 [2024-07-23 00:21:28.977606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:14.535 [2024-07-23 00:21:28.977619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.535 [2024-07-23 00:21:28.977647] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:14.535 [2024-07-23 00:21:28.977926] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:14.535 [2024-07-23 00:21:28.977950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.535 [2024-07-23 00:21:28.977963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:14.535 [2024-07-23 00:21:28.977977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:17:14.535 [2024-07-23 00:21:28.977987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.535 [2024-07-23 00:21:28.978061] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7c24566a-36b9-43b7-8fcf-8e163863318c 00:17:14.535 [2024-07-23 00:21:28.979501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.535 [2024-07-23 00:21:28.979530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:14.535 [2024-07-23 00:21:28.979542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:14.535 [2024-07-23 00:21:28.979558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.535 [2024-07-23 00:21:28.987195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.535 [2024-07-23 00:21:28.987228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:14.535 [2024-07-23 00:21:28.987241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.607 ms 00:17:14.535 [2024-07-23 00:21:28.987268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.535 [2024-07-23 00:21:28.987358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.535 [2024-07-23 00:21:28.987381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:14.535 [2024-07-23 00:21:28.987392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.061 ms 00:17:14.535 [2024-07-23 00:21:28.987429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.535 [2024-07-23 00:21:28.987495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.535 [2024-07-23 00:21:28.987511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:14.535 [2024-07-23 00:21:28.987528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:14.535 [2024-07-23 00:21:28.987541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.535 [2024-07-23 00:21:28.987569] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:14.535 [2024-07-23 00:21:28.989435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.535 [2024-07-23 00:21:28.989467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:14.535 [2024-07-23 00:21:28.989483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.873 ms 00:17:14.535 [2024-07-23 00:21:28.989493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.535 [2024-07-23 00:21:28.989536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.535 [2024-07-23 00:21:28.989547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:14.535 [2024-07-23 00:21:28.989561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:14.535 [2024-07-23 00:21:28.989570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.535 [2024-07-23 00:21:28.989596] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:14.535 [2024-07-23 00:21:28.989740] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:14.535 [2024-07-23 00:21:28.989759] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:14.535 [2024-07-23 00:21:28.989772] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:14.535 [2024-07-23 00:21:28.989788] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:14.535 [2024-07-23 00:21:28.989800] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:14.535 [2024-07-23 00:21:28.989813] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:14.535 [2024-07-23 00:21:28.989823] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:14.535 [2024-07-23 00:21:28.989838] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:14.535 [2024-07-23 00:21:28.989848] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:14.535 [2024-07-23 00:21:28.989861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.535 [2024-07-23 00:21:28.989871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:14.535 [2024-07-23 00:21:28.989896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:17:14.535 [2024-07-23 00:21:28.989906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.535 [2024-07-23 00:21:28.989978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
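Every layer under ftl0 was assembled over rpc.py in the trace above. Condensed into one runnable sequence (the UUID arguments are run-specific values echoed by the two create calls, 12bf6442-... and 6d55ca19-... in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device -> nvme0n1
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # prints the lvstore UUID
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore UUID>         # thin 103424 MiB volume
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache controller
  $rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB cache slice
  $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol UUID> --l2p_dram_limit 10 -c nvc0n1p0

The sizes come from the get_bdev_size helper, which pipes bdev_get_bdevs -b <name> through jq for .block_size and .num_blocks: 4096 x 1310720 blocks = 5120 MiB for nvme0n1, and 4096 x 26476544 = 103424 MiB for the thin lvol. The '[: : integer expression expected' message from restore.sh line 54 is '[' '' -eq 1 ']' handing -eq an empty operand; defaulting the flag, e.g. [ "${opt:-0}" -eq 1 ] (variable name illustrative), would silence the complaint without changing which branch is taken.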
00:17:14.535 [2024-07-23 00:21:28.989989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:14.535 [2024-07-23 00:21:28.990004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:14.535 [2024-07-23 00:21:28.990014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.535 [2024-07-23 00:21:28.990100] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:14.535 [2024-07-23 00:21:28.990112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:14.535 [2024-07-23 00:21:28.990126] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:14.535 [2024-07-23 00:21:28.990137] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.535 [2024-07-23 00:21:28.990149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:14.535 [2024-07-23 00:21:28.990158] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:14.535 [2024-07-23 00:21:28.990171] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:14.535 [2024-07-23 00:21:28.990180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:14.535 [2024-07-23 00:21:28.990191] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:14.535 [2024-07-23 00:21:28.990201] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:14.535 [2024-07-23 00:21:28.990212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:14.535 [2024-07-23 00:21:28.990222] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:14.535 [2024-07-23 00:21:28.990234] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:14.535 [2024-07-23 00:21:28.990243] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:14.535 [2024-07-23 00:21:28.990257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:14.535 [2024-07-23 00:21:28.990283] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.535 [2024-07-23 00:21:28.990295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:14.535 [2024-07-23 00:21:28.990305] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:14.535 [2024-07-23 00:21:28.990316] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.535 [2024-07-23 00:21:28.990325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:14.535 [2024-07-23 00:21:28.990337] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:14.535 [2024-07-23 00:21:28.990346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.535 [2024-07-23 00:21:28.990357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:14.535 [2024-07-23 00:21:28.990366] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:14.535 [2024-07-23 00:21:28.990378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.535 [2024-07-23 00:21:28.990387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:14.535 [2024-07-23 00:21:28.990399] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:14.535 [2024-07-23 00:21:28.990408] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.535 [2024-07-23 00:21:28.990419] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:17:14.535 [2024-07-23 00:21:28.990428] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:14.535 [2024-07-23 00:21:28.990443] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.535 [2024-07-23 00:21:28.990453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:14.535 [2024-07-23 00:21:28.990467] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:14.535 [2024-07-23 00:21:28.990476] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:14.535 [2024-07-23 00:21:28.990488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:14.535 [2024-07-23 00:21:28.990497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:14.535 [2024-07-23 00:21:28.990508] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:14.535 [2024-07-23 00:21:28.990517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:14.535 [2024-07-23 00:21:28.990528] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:14.535 [2024-07-23 00:21:28.990536] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.535 [2024-07-23 00:21:28.990547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:14.535 [2024-07-23 00:21:28.990557] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:14.536 [2024-07-23 00:21:28.990568] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.536 [2024-07-23 00:21:28.990577] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:14.536 [2024-07-23 00:21:28.990589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:14.536 [2024-07-23 00:21:28.990599] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:14.536 [2024-07-23 00:21:28.990621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.536 [2024-07-23 00:21:28.990635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:14.536 [2024-07-23 00:21:28.990647] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:14.536 [2024-07-23 00:21:28.990657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:14.536 [2024-07-23 00:21:28.990668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:14.536 [2024-07-23 00:21:28.990677] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:14.536 [2024-07-23 00:21:28.990690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:14.536 [2024-07-23 00:21:28.990704] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:14.536 [2024-07-23 00:21:28.990719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:14.536 [2024-07-23 00:21:28.990742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:14.536 [2024-07-23 00:21:28.990755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:14.536 [2024-07-23 00:21:28.990766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x50a0 blk_sz:0x80 00:17:14.536 [2024-07-23 00:21:28.990778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:14.536 [2024-07-23 00:21:28.990788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:14.536 [2024-07-23 00:21:28.990801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:14.536 [2024-07-23 00:21:28.990811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:14.536 [2024-07-23 00:21:28.990826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:14.536 [2024-07-23 00:21:28.990837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:14.536 [2024-07-23 00:21:28.990850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:14.536 [2024-07-23 00:21:28.990860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:14.536 [2024-07-23 00:21:28.990872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:14.536 [2024-07-23 00:21:28.990882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:14.536 [2024-07-23 00:21:28.990895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:14.536 [2024-07-23 00:21:28.990905] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:14.536 [2024-07-23 00:21:28.990918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:14.536 [2024-07-23 00:21:28.990929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:14.536 [2024-07-23 00:21:28.990941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:14.536 [2024-07-23 00:21:28.990952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:14.536 [2024-07-23 00:21:28.990965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:14.536 [2024-07-23 00:21:28.990976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.536 [2024-07-23 00:21:28.990989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:14.536 [2024-07-23 00:21:28.990999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:17:14.536 [2024-07-23 00:21:28.991014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.536 [2024-07-23 00:21:28.991055] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:17:14.536 [2024-07-23 00:21:28.991071] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:17.825 [2024-07-23 00:21:32.413597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.825 [2024-07-23 00:21:32.413669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:17.825 [2024-07-23 00:21:32.413687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3428.095 ms 00:17:17.825 [2024-07-23 00:21:32.413700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.825 [2024-07-23 00:21:32.425112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.825 [2024-07-23 00:21:32.425184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:17.825 [2024-07-23 00:21:32.425209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.316 ms 00:17:17.825 [2024-07-23 00:21:32.425226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.825 [2024-07-23 00:21:32.425367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.825 [2024-07-23 00:21:32.425391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:17.825 [2024-07-23 00:21:32.425402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:17.825 [2024-07-23 00:21:32.425416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.825 [2024-07-23 00:21:32.436346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.825 [2024-07-23 00:21:32.436410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:17.825 [2024-07-23 00:21:32.436432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.890 ms 00:17:17.825 [2024-07-23 00:21:32.436446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.825 [2024-07-23 00:21:32.436491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.825 [2024-07-23 00:21:32.436505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:17.825 [2024-07-23 00:21:32.436517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:17.825 [2024-07-23 00:21:32.436530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.825 [2024-07-23 00:21:32.437004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.825 [2024-07-23 00:21:32.437020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:17.825 [2024-07-23 00:21:32.437031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:17:17.825 [2024-07-23 00:21:32.437044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.825 [2024-07-23 00:21:32.437210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.825 [2024-07-23 00:21:32.437239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:17.825 [2024-07-23 00:21:32.437249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:17:17.825 [2024-07-23 00:21:32.437291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.825 [2024-07-23 00:21:32.444560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.825 [2024-07-23 00:21:32.444603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:17:17.825 [2024-07-23 00:21:32.444616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.255 ms 00:17:17.825 [2024-07-23 00:21:32.444629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.825 [2024-07-23 00:21:32.452460] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:17.825 [2024-07-23 00:21:32.455755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.825 [2024-07-23 00:21:32.455789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:17.825 [2024-07-23 00:21:32.455805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.058 ms 00:17:17.825 [2024-07-23 00:21:32.455816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.085 [2024-07-23 00:21:32.537601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.085 [2024-07-23 00:21:32.537669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:18.085 [2024-07-23 00:21:32.537689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.872 ms 00:17:18.085 [2024-07-23 00:21:32.537704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.085 [2024-07-23 00:21:32.537903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.085 [2024-07-23 00:21:32.537916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:18.085 [2024-07-23 00:21:32.537930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:17:18.085 [2024-07-23 00:21:32.537940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.085 [2024-07-23 00:21:32.542121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.085 [2024-07-23 00:21:32.542162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:18.085 [2024-07-23 00:21:32.542178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.143 ms 00:17:18.085 [2024-07-23 00:21:32.542192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.085 [2024-07-23 00:21:32.545300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.085 [2024-07-23 00:21:32.545451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:18.085 [2024-07-23 00:21:32.545593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.065 ms 00:17:18.085 [2024-07-23 00:21:32.545628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.085 [2024-07-23 00:21:32.545922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.085 [2024-07-23 00:21:32.545971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:18.085 [2024-07-23 00:21:32.546006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:17:18.085 [2024-07-23 00:21:32.546106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.085 [2024-07-23 00:21:32.586042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.085 [2024-07-23 00:21:32.586279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:18.085 [2024-07-23 00:21:32.586385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.934 ms 00:17:18.085 [2024-07-23 00:21:32.586427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:18.085 [2024-07-23 00:21:32.591152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.085 [2024-07-23 00:21:32.591322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:18.085 [2024-07-23 00:21:32.591414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.662 ms 00:17:18.085 [2024-07-23 00:21:32.591452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.085 [2024-07-23 00:21:32.594849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.085 [2024-07-23 00:21:32.594980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:18.085 [2024-07-23 00:21:32.595059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.326 ms 00:17:18.085 [2024-07-23 00:21:32.595095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.085 [2024-07-23 00:21:32.598789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.085 [2024-07-23 00:21:32.598916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:18.085 [2024-07-23 00:21:32.598993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.618 ms 00:17:18.085 [2024-07-23 00:21:32.599028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.085 [2024-07-23 00:21:32.599170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.085 [2024-07-23 00:21:32.599212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:18.085 [2024-07-23 00:21:32.599246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:18.085 [2024-07-23 00:21:32.599383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.085 [2024-07-23 00:21:32.599511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.085 [2024-07-23 00:21:32.599546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:18.085 [2024-07-23 00:21:32.599580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:18.085 [2024-07-23 00:21:32.599608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.085 [2024-07-23 00:21:32.600656] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3628.713 ms, result 0 00:17:18.085 { 00:17:18.085 "name": "ftl0", 00:17:18.085 "uuid": "7c24566a-36b9-43b7-8fcf-8e163863318c" 00:17:18.085 } 00:17:18.085 00:21:32 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:18.085 00:21:32 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:18.345 00:21:32 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:18.345 00:21:32 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:18.345 [2024-07-23 00:21:32.979536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.345 [2024-07-23 00:21:32.979595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:18.345 [2024-07-23 00:21:32.979626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:18.345 [2024-07-23 00:21:32.979650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.345 [2024-07-23 00:21:32.979678] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:17:18.345 [2024-07-23 00:21:32.980378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.345 [2024-07-23 00:21:32.980396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:18.346 [2024-07-23 00:21:32.980415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:17:18.346 [2024-07-23 00:21:32.980425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.346 [2024-07-23 00:21:32.980645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.346 [2024-07-23 00:21:32.980657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:18.346 [2024-07-23 00:21:32.980669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:17:18.346 [2024-07-23 00:21:32.980679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.346 [2024-07-23 00:21:32.983155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.346 [2024-07-23 00:21:32.983180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:18.346 [2024-07-23 00:21:32.983194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.457 ms 00:17:18.346 [2024-07-23 00:21:32.983228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.346 [2024-07-23 00:21:32.988372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.346 [2024-07-23 00:21:32.988407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:18.346 [2024-07-23 00:21:32.988423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.124 ms 00:17:18.346 [2024-07-23 00:21:32.988433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.346 [2024-07-23 00:21:32.990026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.346 [2024-07-23 00:21:32.990067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:18.346 [2024-07-23 00:21:32.990086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.527 ms 00:17:18.346 [2024-07-23 00:21:32.990095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.346 [2024-07-23 00:21:32.994760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.346 [2024-07-23 00:21:32.994801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:18.346 [2024-07-23 00:21:32.994817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.631 ms 00:17:18.346 [2024-07-23 00:21:32.994827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.346 [2024-07-23 00:21:32.994944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.346 [2024-07-23 00:21:32.994956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:18.346 [2024-07-23 00:21:32.994969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:18.346 [2024-07-23 00:21:32.994982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.346 [2024-07-23 00:21:32.996701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.346 [2024-07-23 00:21:32.996737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:18.346 [2024-07-23 00:21:32.996751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.698 ms 
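This 'FTL shutdown' pass is the flush half of the restore contract: restore.sh has just snapshotted the bdev subsystem to ftl.json, and bdev_ftl_unload now persists the L2P, NV cache, band and trim metadata and the superblock so a later step can reload the same ftl0 from that JSON. A condensed sketch of the capture-and-unload step (restore.sh lines 61-65 above; the brace grouping for the redirection is one idiomatic way to write it, the script's exact form is not visible in the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  {
      echo '{"subsystems": ['
      $rpc save_subsystem_config -n bdev    # bdev subsystem as replayable JSON-RPC config
      echo ']}'
  } > "$cfg"
  $rpc bdev_ftl_unload -b ftl0              # drives the shutdown sequence traced here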
00:17:18.346 [2024-07-23 00:21:32.996760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.346 [2024-07-23 00:21:32.998434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.346 [2024-07-23 00:21:32.998470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:18.346 [2024-07-23 00:21:32.998487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.638 ms 00:17:18.346 [2024-07-23 00:21:32.998496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.346 [2024-07-23 00:21:32.999776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.346 [2024-07-23 00:21:32.999812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:18.346 [2024-07-23 00:21:32.999826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:17:18.346 [2024-07-23 00:21:32.999835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.346 [2024-07-23 00:21:33.001076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.346 [2024-07-23 00:21:33.001129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:18.346 [2024-07-23 00:21:33.001156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.181 ms 00:17:18.346 [2024-07-23 00:21:33.001176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.346 [2024-07-23 00:21:33.001233] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:18.346 [2024-07-23 00:21:33.001257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 
0 state: free 00:17:18.346 [2024-07-23 00:21:33.001512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
39: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.001989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:18.346 [2024-07-23 00:21:33.002000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002137] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002451] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:18.347 [2024-07-23 00:21:33.002602] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:18.347 [2024-07-23 00:21:33.002618] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c24566a-36b9-43b7-8fcf-8e163863318c 00:17:18.347 [2024-07-23 00:21:33.002629] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:18.347 [2024-07-23 00:21:33.002641] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:18.347 [2024-07-23 00:21:33.002650] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:18.347 [2024-07-23 00:21:33.002664] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:18.347 [2024-07-23 00:21:33.002674] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:18.347 [2024-07-23 00:21:33.002686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:18.347 [2024-07-23 00:21:33.002699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:18.347 [2024-07-23 00:21:33.002710] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:18.347 [2024-07-23 00:21:33.002719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:18.347 [2024-07-23 00:21:33.002732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.347 [2024-07-23 00:21:33.002742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:18.347 [2024-07-23 00:21:33.002755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.504 ms 00:17:18.347 [2024-07-23 00:21:33.002765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.347 [2024-07-23 00:21:33.004766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.347 [2024-07-23 00:21:33.004798] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:18.347 [2024-07-23 00:21:33.004817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.978 ms 00:17:18.347 [2024-07-23 00:21:33.004827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.347 [2024-07-23 00:21:33.004943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.347 [2024-07-23 00:21:33.004954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:18.347 [2024-07-23 00:21:33.004967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:18.347 [2024-07-23 00:21:33.004977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.347 [2024-07-23 00:21:33.012079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.347 [2024-07-23 00:21:33.012105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:18.347 [2024-07-23 00:21:33.012120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.347 [2024-07-23 00:21:33.012133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.347 [2024-07-23 00:21:33.012190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.347 [2024-07-23 00:21:33.012201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:18.347 [2024-07-23 00:21:33.012214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.347 [2024-07-23 00:21:33.012224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.347 [2024-07-23 00:21:33.012325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.347 [2024-07-23 00:21:33.012345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:18.347 [2024-07-23 00:21:33.012362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.347 [2024-07-23 00:21:33.012371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.347 [2024-07-23 00:21:33.012397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.347 [2024-07-23 00:21:33.012407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:18.347 [2024-07-23 00:21:33.012419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.347 [2024-07-23 00:21:33.012429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.347 [2024-07-23 00:21:33.025562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.347 [2024-07-23 00:21:33.025610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:18.347 [2024-07-23 00:21:33.025628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.347 [2024-07-23 00:21:33.025639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.607 [2024-07-23 00:21:33.034105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.607 [2024-07-23 00:21:33.034147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:18.607 [2024-07-23 00:21:33.034164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.607 [2024-07-23 00:21:33.034174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.607 [2024-07-23 00:21:33.034258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:18.607 [2024-07-23 00:21:33.034294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:18.607 [2024-07-23 00:21:33.034312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.607 [2024-07-23 00:21:33.034322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.607 [2024-07-23 00:21:33.034368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.607 [2024-07-23 00:21:33.034403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.607 [2024-07-23 00:21:33.034417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.607 [2024-07-23 00:21:33.034426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.607 [2024-07-23 00:21:33.034508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.607 [2024-07-23 00:21:33.034521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.607 [2024-07-23 00:21:33.034534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.607 [2024-07-23 00:21:33.034544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.607 [2024-07-23 00:21:33.034584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.607 [2024-07-23 00:21:33.034596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:18.607 [2024-07-23 00:21:33.034611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.607 [2024-07-23 00:21:33.034620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.607 [2024-07-23 00:21:33.034668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.607 [2024-07-23 00:21:33.034679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.607 [2024-07-23 00:21:33.034695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.607 [2024-07-23 00:21:33.034704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.607 [2024-07-23 00:21:33.034758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.607 [2024-07-23 00:21:33.034774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.607 [2024-07-23 00:21:33.034787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.607 [2024-07-23 00:21:33.034797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.607 [2024-07-23 00:21:33.034941] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.449 ms, result 0 00:17:18.607 true 00:17:18.607 00:21:33 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 89417 00:17:18.607 00:21:33 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 89417 ']' 00:17:18.607 00:21:33 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 89417 00:17:18.607 00:21:33 ftl.ftl_restore -- common/autotest_common.sh@951 -- # uname 00:17:18.607 00:21:33 ftl.ftl_restore -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:18.607 00:21:33 ftl.ftl_restore -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89417 00:17:18.608 killing process with pid 89417 00:17:18.608 00:21:33 ftl.ftl_restore -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:18.608 00:21:33 ftl.ftl_restore -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:18.608 00:21:33 ftl.ftl_restore -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89417' 00:17:18.608 00:21:33 ftl.ftl_restore -- common/autotest_common.sh@965 -- # kill 89417 00:17:18.608 00:21:33 ftl.ftl_restore -- common/autotest_common.sh@970 -- # wait 89417 00:17:21.893 00:21:36 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:26.096 262144+0 records in 00:17:26.096 262144+0 records out 00:17:26.096 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.86868 s, 278 MB/s 00:17:26.096 00:21:40 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:27.471 00:21:41 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:27.471 [2024-07-23 00:21:41.831952] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:27.471 [2024-07-23 00:21:41.832116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89619 ] 00:17:27.471 [2024-07-23 00:21:41.984192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.471 [2024-07-23 00:21:42.027351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.471 [2024-07-23 00:21:42.130635] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:27.471 [2024-07-23 00:21:42.130712] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:27.730 [2024-07-23 00:21:42.282382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.730 [2024-07-23 00:21:42.282452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:27.730 [2024-07-23 00:21:42.282491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:27.730 [2024-07-23 00:21:42.282502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.730 [2024-07-23 00:21:42.282553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.730 [2024-07-23 00:21:42.282564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:27.730 [2024-07-23 00:21:42.282582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:27.730 [2024-07-23 00:21:42.282595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.730 [2024-07-23 00:21:42.282616] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:27.730 [2024-07-23 00:21:42.282818] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:27.730 [2024-07-23 00:21:42.282836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.730 [2024-07-23 00:21:42.282849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:27.730 [2024-07-23 00:21:42.282860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:17:27.730 [2024-07-23 00:21:42.282870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.730 [2024-07-23 00:21:42.284305] 
mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:27.730 [2024-07-23 00:21:42.286870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.730 [2024-07-23 00:21:42.286909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:27.730 [2024-07-23 00:21:42.286927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.571 ms 00:17:27.730 [2024-07-23 00:21:42.286945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.730 [2024-07-23 00:21:42.287003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.730 [2024-07-23 00:21:42.287022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:27.730 [2024-07-23 00:21:42.287033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:27.730 [2024-07-23 00:21:42.287043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.730 [2024-07-23 00:21:42.293878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.730 [2024-07-23 00:21:42.293913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:27.730 [2024-07-23 00:21:42.293925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.790 ms 00:17:27.730 [2024-07-23 00:21:42.293935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.730 [2024-07-23 00:21:42.294028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.730 [2024-07-23 00:21:42.294042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:27.730 [2024-07-23 00:21:42.294053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:27.730 [2024-07-23 00:21:42.294063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.730 [2024-07-23 00:21:42.294125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.730 [2024-07-23 00:21:42.294137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:27.730 [2024-07-23 00:21:42.294155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:27.730 [2024-07-23 00:21:42.294165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.730 [2024-07-23 00:21:42.294201] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:27.730 [2024-07-23 00:21:42.295853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.730 [2024-07-23 00:21:42.295882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:27.730 [2024-07-23 00:21:42.295894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.664 ms 00:17:27.730 [2024-07-23 00:21:42.295913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.730 [2024-07-23 00:21:42.295946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.730 [2024-07-23 00:21:42.295956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:27.730 [2024-07-23 00:21:42.295970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:27.730 [2024-07-23 00:21:42.295979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.730 [2024-07-23 00:21:42.296002] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:27.730 [2024-07-23 00:21:42.296023] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:27.730 [2024-07-23 00:21:42.296063] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:27.730 [2024-07-23 00:21:42.296082] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:27.730 [2024-07-23 00:21:42.296164] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:27.730 [2024-07-23 00:21:42.296181] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:27.730 [2024-07-23 00:21:42.296197] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:27.731 [2024-07-23 00:21:42.296210] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:27.731 [2024-07-23 00:21:42.296222] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:27.731 [2024-07-23 00:21:42.296233] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:27.731 [2024-07-23 00:21:42.296242] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:27.731 [2024-07-23 00:21:42.296252] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:27.731 [2024-07-23 00:21:42.296283] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:27.731 [2024-07-23 00:21:42.296294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.731 [2024-07-23 00:21:42.296304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:27.731 [2024-07-23 00:21:42.296314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:27.731 [2024-07-23 00:21:42.296334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.731 [2024-07-23 00:21:42.296401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.731 [2024-07-23 00:21:42.296412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:27.731 [2024-07-23 00:21:42.296422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:27.731 [2024-07-23 00:21:42.296431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.731 [2024-07-23 00:21:42.296522] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:27.731 [2024-07-23 00:21:42.296536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:27.731 [2024-07-23 00:21:42.296547] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.731 [2024-07-23 00:21:42.296557] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.731 [2024-07-23 00:21:42.296570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:27.731 [2024-07-23 00:21:42.296579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:27.731 [2024-07-23 00:21:42.296589] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:27.731 [2024-07-23 00:21:42.296599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:27.731 [2024-07-23 00:21:42.296609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
00:17:27.731 [2024-07-23 00:21:42.296618] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.731 [2024-07-23 00:21:42.296628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:27.731 [2024-07-23 00:21:42.296637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:27.731 [2024-07-23 00:21:42.296646] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.731 [2024-07-23 00:21:42.296655] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:27.731 [2024-07-23 00:21:42.296664] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:27.731 [2024-07-23 00:21:42.296673] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.731 [2024-07-23 00:21:42.296687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:27.731 [2024-07-23 00:21:42.296696] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:27.731 [2024-07-23 00:21:42.296705] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.731 [2024-07-23 00:21:42.296714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:27.731 [2024-07-23 00:21:42.296723] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:27.731 [2024-07-23 00:21:42.296732] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.731 [2024-07-23 00:21:42.296741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:27.731 [2024-07-23 00:21:42.296750] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:27.731 [2024-07-23 00:21:42.296759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.731 [2024-07-23 00:21:42.296768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:27.731 [2024-07-23 00:21:42.296777] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:27.731 [2024-07-23 00:21:42.296786] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.731 [2024-07-23 00:21:42.296795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:27.731 [2024-07-23 00:21:42.296804] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:27.731 [2024-07-23 00:21:42.296812] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.731 [2024-07-23 00:21:42.296821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:27.731 [2024-07-23 00:21:42.296835] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:27.731 [2024-07-23 00:21:42.296844] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.731 [2024-07-23 00:21:42.296853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:27.731 [2024-07-23 00:21:42.296862] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:27.731 [2024-07-23 00:21:42.296871] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.731 [2024-07-23 00:21:42.296880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:27.731 [2024-07-23 00:21:42.296889] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:27.731 [2024-07-23 00:21:42.296897] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.731 [2024-07-23 00:21:42.296906] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:27.731 [2024-07-23 00:21:42.296917] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:27.731 [2024-07-23 00:21:42.296927] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.731 [2024-07-23 00:21:42.296936] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:27.731 [2024-07-23 00:21:42.296946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:27.731 [2024-07-23 00:21:42.296955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.731 [2024-07-23 00:21:42.296965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.731 [2024-07-23 00:21:42.296974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:27.731 [2024-07-23 00:21:42.296987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:27.731 [2024-07-23 00:21:42.296996] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:27.731 [2024-07-23 00:21:42.297005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:27.731 [2024-07-23 00:21:42.297014] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:27.731 [2024-07-23 00:21:42.297023] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:27.731 [2024-07-23 00:21:42.297033] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:27.731 [2024-07-23 00:21:42.297044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.731 [2024-07-23 00:21:42.297078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:27.731 [2024-07-23 00:21:42.297097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:27.731 [2024-07-23 00:21:42.297114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:27.731 [2024-07-23 00:21:42.297131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:27.731 [2024-07-23 00:21:42.297148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:27.731 [2024-07-23 00:21:42.297166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:27.731 [2024-07-23 00:21:42.297183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:27.731 [2024-07-23 00:21:42.297194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:27.731 [2024-07-23 00:21:42.297204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:27.731 [2024-07-23 00:21:42.297218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:27.731 [2024-07-23 00:21:42.297229] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:27.731 [2024-07-23 00:21:42.297239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:27.731 [2024-07-23 00:21:42.297249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:27.731 [2024-07-23 00:21:42.297259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:27.731 [2024-07-23 00:21:42.297559] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:27.731 [2024-07-23 00:21:42.297619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.731 [2024-07-23 00:21:42.297666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:27.731 [2024-07-23 00:21:42.297712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:27.731 [2024-07-23 00:21:42.297770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:27.731 [2024-07-23 00:21:42.297872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:27.731 [2024-07-23 00:21:42.297924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.731 [2024-07-23 00:21:42.297954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:27.731 [2024-07-23 00:21:42.297984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.457 ms 00:17:27.731 [2024-07-23 00:21:42.298018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.731 [2024-07-23 00:21:42.320865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.731 [2024-07-23 00:21:42.321101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:27.731 [2024-07-23 00:21:42.321277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.782 ms 00:17:27.731 [2024-07-23 00:21:42.321332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.731 [2024-07-23 00:21:42.321469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.321575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:27.732 [2024-07-23 00:21:42.321622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:27.732 [2024-07-23 00:21:42.321668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.333323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.333498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:27.732 [2024-07-23 00:21:42.333628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.515 ms 00:17:27.732 [2024-07-23 00:21:42.333678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.333764] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.333807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:27.732 [2024-07-23 00:21:42.333847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:27.732 [2024-07-23 00:21:42.333944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.334489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.334509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:27.732 [2024-07-23 00:21:42.334524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:17:27.732 [2024-07-23 00:21:42.334536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.334684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.334706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:27.732 [2024-07-23 00:21:42.334720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:17:27.732 [2024-07-23 00:21:42.334733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.340812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.340845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:27.732 [2024-07-23 00:21:42.340858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.060 ms 00:17:27.732 [2024-07-23 00:21:42.340867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.343603] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:27.732 [2024-07-23 00:21:42.343639] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:27.732 [2024-07-23 00:21:42.343654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.343664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:27.732 [2024-07-23 00:21:42.343674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.695 ms 00:17:27.732 [2024-07-23 00:21:42.343684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.356338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.356489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:27.732 [2024-07-23 00:21:42.356571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.635 ms 00:17:27.732 [2024-07-23 00:21:42.356607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.358346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.358472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:27.732 [2024-07-23 00:21:42.358549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.674 ms 00:17:27.732 [2024-07-23 00:21:42.358583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.360054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 
00:21:42.360179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:27.732 [2024-07-23 00:21:42.360197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms 00:17:27.732 [2024-07-23 00:21:42.360207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.360497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.360514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:27.732 [2024-07-23 00:21:42.360526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:17:27.732 [2024-07-23 00:21:42.360535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.381804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.381873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:27.732 [2024-07-23 00:21:42.381891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.277 ms 00:17:27.732 [2024-07-23 00:21:42.381901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.388098] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:27.732 [2024-07-23 00:21:42.390679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.390713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:27.732 [2024-07-23 00:21:42.390733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.747 ms 00:17:27.732 [2024-07-23 00:21:42.390759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.390831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.390844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:27.732 [2024-07-23 00:21:42.390854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:27.732 [2024-07-23 00:21:42.390871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.390933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.390945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:27.732 [2024-07-23 00:21:42.390966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:27.732 [2024-07-23 00:21:42.390979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.390999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.391017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:27.732 [2024-07-23 00:21:42.391027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:27.732 [2024-07-23 00:21:42.391037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.391069] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:27.732 [2024-07-23 00:21:42.391081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.391091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:27.732 [2024-07-23 
00:21:42.391101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:27.732 [2024-07-23 00:21:42.391123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.394818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.394854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:27.732 [2024-07-23 00:21:42.394877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.681 ms 00:17:27.732 [2024-07-23 00:21:42.394887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.394957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.732 [2024-07-23 00:21:42.394968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:27.732 [2024-07-23 00:21:42.394980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:27.732 [2024-07-23 00:21:42.394990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.732 [2024-07-23 00:21:42.396074] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 113.484 ms, result 0 00:18:07.268  Copying: 1024/1024 [MB] (average 26 MBps) [2024-07-23 00:22:21.743954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.268 [2024-07-23 00:22:21.744001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:07.268 [2024-07-23 00:22:21.744019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:07.268 [2024-07-23 00:22:21.744038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.268 [2024-07-23 00:22:21.744059] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:07.268 [2024-07-23 00:22:21.744738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.268 [2024-07-23 00:22:21.744752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:07.268 [2024-07-23 00:22:21.744763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 
00:18:07.268 [2024-07-23 00:22:21.744772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.268 [2024-07-23 00:22:21.746436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.268 [2024-07-23 00:22:21.746474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:07.268 [2024-07-23 00:22:21.746487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.646 ms 00:18:07.268 [2024-07-23 00:22:21.746496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.268 [2024-07-23 00:22:21.763974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.268 [2024-07-23 00:22:21.764015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:07.268 [2024-07-23 00:22:21.764029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.473 ms 00:18:07.268 [2024-07-23 00:22:21.764039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.268 [2024-07-23 00:22:21.769124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.268 [2024-07-23 00:22:21.769156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:07.268 [2024-07-23 00:22:21.769168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.060 ms 00:18:07.268 [2024-07-23 00:22:21.769178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.268 [2024-07-23 00:22:21.770670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.268 [2024-07-23 00:22:21.770705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:07.268 [2024-07-23 00:22:21.770717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:18:07.268 [2024-07-23 00:22:21.770727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.268 [2024-07-23 00:22:21.774247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.268 [2024-07-23 00:22:21.774291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:07.268 [2024-07-23 00:22:21.774303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.498 ms 00:18:07.268 [2024-07-23 00:22:21.774312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.268 [2024-07-23 00:22:21.774430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.268 [2024-07-23 00:22:21.774443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:07.268 [2024-07-23 00:22:21.774454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:18:07.268 [2024-07-23 00:22:21.774464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.268 [2024-07-23 00:22:21.776450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.268 [2024-07-23 00:22:21.776483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:07.268 [2024-07-23 00:22:21.776494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.970 ms 00:18:07.268 [2024-07-23 00:22:21.776503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.268 [2024-07-23 00:22:21.778010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.268 [2024-07-23 00:22:21.778044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:07.268 [2024-07-23 00:22:21.778056] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms
00:18:07.268 [2024-07-23 00:22:21.778065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:07.268 [2024-07-23 00:22:21.779364] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock (duration: 1.275 ms, status: 0)
00:18:07.268 [2024-07-23 00:22:21.780463] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state (duration: 1.002 ms, status: 0)
00:18:07.268 [2024-07-23 00:22:21.780538] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:07.268 [2024-07-23 00:22:21.780553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands identical)
00:18:07.269 [2024-07-23 00:22:21.781639] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:07.269 [2024-07-23 00:22:21.781648] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c24566a-36b9-43b7-8fcf-8e163863318c
00:18:07.269 [2024-07-23 00:22:21.781660] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:07.269 [2024-07-23 00:22:21.781669] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:07.269 [2024-07-23 00:22:21.781678] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:07.269 [2024-07-23 00:22:21.781688] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:07.269 [2024-07-23 00:22:21.781705] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:07.269 [2024-07-23 00:22:21.781715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:07.269 [2024-07-23 00:22:21.781724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:07.269 [2024-07-23 00:22:21.781733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:07.269 [2024-07-23 00:22:21.781742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
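A reading note on the stats dump above, with my own arithmetic: WAF (write amplification factor) is presumably total device writes divided by user writes, so with total writes: 960 and user writes: 0 the ratio 960 / 0 is printed as inf; every write so far is FTL-internal metadata traffic, none of it user data.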
00:18:07.269 [2024-07-23 00:22:21.781751] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 1.216 ms, status: 0)
00:18:07.269 [2024-07-23 00:22:21.783454] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 1.654 ms, status: 0)
00:18:07.269 [2024-07-23 00:22:21.783607] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.086 ms, status: 0)
00:18:07.269 [2024-07-23 00:22:21.789571] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc (duration: 0.000 ms, status: 0)
00:18:07.269 [2024-07-23 00:22:21.789668] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata (duration: 0.000 ms, status: 0)
00:18:07.269 [2024-07-23 00:22:21.789754] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map (duration: 0.000 ms, status: 0)
00:18:07.270 [2024-07-23 00:22:21.789801] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map (duration: 0.000 ms, status: 0)
00:18:07.270 [2024-07-23 00:22:21.801307] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache (duration: 0.000 ms, status: 0)
00:18:07.270 [2024-07-23 00:22:21.809769] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata (duration: 0.000 ms, status: 0)
00:18:07.270 [2024-07-23 00:22:21.810148] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel (duration: 0.000 ms, status: 0)
00:18:07.270 [2024-07-23 00:22:21.810359] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands (duration: 0.000 ms, status: 0)
00:18:07.270 [2024-07-23 00:22:21.810555] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools (duration: 0.000 ms, status: 0)
00:18:07.270 [2024-07-23 00:22:21.810636] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock (duration: 0.000 ms, status: 0)
00:18:07.270 [2024-07-23 00:22:21.810710] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev (duration: 0.000 ms, status: 0)
00:18:07.270 [2024-07-23 00:22:21.810783] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev (duration: 0.000 ms, status: 0)
00:18:07.270 [2024-07-23 00:22:21.810934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.055 ms, result 0
00:18:07.529
00:18:07.529
00:18:07.529
00:22:22 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
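For readers skimming the log, the restore step above reduces to the following spdk_dd invocation; the flag annotations are mine, and the 4 KiB block size is an assumption inferred from the copy totals further down rather than something the log states:

  # Read the ftl0 bdev back out into a plain file for later comparison:
  #   --ib=ftl0       input bdev (the FTL device under test)
  #   --of=...        output file receiving the data
  #   --json=...      SPDK JSON config that recreates the bdev stack
  #   --count=262144  blocks to copy; 262144 x 4 KiB = 1024 MiB
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
      --count=262144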
00:18:07.789 [2024-07-23 00:22:22.219671] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:18:07.789 [2024-07-23 00:22:22.219913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90031 ]
00:18:07.789 [2024-07-23 00:22:22.371090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:07.789 [2024-07-23 00:22:22.414771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:08.049 [2024-07-23 00:22:22.516185] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:08.049 [2024-07-23 00:22:22.516257] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:08.049 [2024-07-23 00:22:22.666834] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration (duration: 0.004 ms, status: 0)
00:18:08.049 [2024-07-23 00:22:22.666962] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev (duration: 0.027 ms, status: 0)
00:18:08.049 [2024-07-23 00:22:22.667019] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:08.049 [2024-07-23 00:22:22.667216] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:18:08.049 [2024-07-23 00:22:22.667234] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev (duration: 0.220 ms, status: 0)
00:18:08.049 [2024-07-23 00:22:22.668677] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:18:08.049 [2024-07-23 00:22:22.671030] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block (duration: 2.358 ms, status: 0)
00:18:08.049 [2024-07-23 00:22:22.671147] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block (duration: 0.021 ms, status: 0)
00:18:08.049 [2024-07-23 00:22:22.677759] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools (duration: 6.542 ms, status: 0)
00:18:08.049 [2024-07-23 00:22:22.677905] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands (duration: 0.068 ms, status: 0)
00:18:08.049 [2024-07-23 00:22:22.677994] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device (duration: 0.010 ms, status: 0)
00:18:08.049 [2024-07-23 00:22:22.678059] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:08.049 [2024-07-23 00:22:22.679674] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel (duration: 1.627 ms, status: 0)
00:18:08.049 [2024-07-23 00:22:22.679756] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands (duration: 0.009 ms, status: 0)
00:18:08.049 [2024-07-23 00:22:22.679818] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:18:08.049 [2024-07-23 00:22:22.679841] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:18:08.049 [2024-07-23 00:22:22.679886] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:18:08.049 [2024-07-23 00:22:22.679905] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes
00:18:08.049 [2024-07-23 00:22:22.679987] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:18:08.049 [2024-07-23 00:22:22.680010] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:18:08.049 [2024-07-23 00:22:22.680025] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes
00:18:08.049 [2024-07-23 00:22:22.680044] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:18:08.049 [2024-07-23 00:22:22.680062] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:18:08.049 [2024-07-23 00:22:22.680073] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:18:08.049 [2024-07-23 00:22:22.680083] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:18:08.049 [2024-07-23 00:22:22.680092] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:18:08.049 [2024-07-23 00:22:22.680102] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:18:08.049 [2024-07-23 00:22:22.680112] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.297 ms, status: 0)
00:18:08.050 [2024-07-23 00:22:22.680211] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.051 ms, status: 0)
00:18:08.050 [2024-07-23 00:22:22.680349] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:18:08.050 [2024-07-23 00:22:22.680362] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:18:08.050 [2024-07-23 00:22:22.680396] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 80.00 MiB
00:18:08.050 [2024-07-23 00:22:22.680441] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 80.12 MiB, blocks 0.50 MiB
00:18:08.050 [2024-07-23 00:22:22.680469] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
00:18:08.050 [2024-07-23 00:22:22.680498] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 113.88 MiB, blocks 0.12 MiB
00:18:08.050 [2024-07-23 00:22:22.680528] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 114.00 MiB, blocks 0.12 MiB
00:18:08.050 [2024-07-23 00:22:22.680556] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 81.12 MiB, blocks 8.00 MiB
00:18:08.050 [2024-07-23 00:22:22.680583] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 89.12 MiB, blocks 8.00 MiB
00:18:08.050 [2024-07-23 00:22:22.680610] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 97.12 MiB, blocks 8.00 MiB
00:18:08.050 [2024-07-23 00:22:22.680637] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 105.12 MiB, blocks 8.00 MiB
00:18:08.050 [2024-07-23 00:22:22.680663] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 113.12 MiB, blocks 0.25 MiB
00:18:08.050 [2024-07-23 00:22:22.680695] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 113.38 MiB, blocks 0.25 MiB
00:18:08.050 [2024-07-23 00:22:22.680722] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 113.62 MiB, blocks 0.12 MiB
00:18:08.050 [2024-07-23 00:22:22.680748] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 113.75 MiB, blocks 0.12 MiB
00:18:08.050 [2024-07-23 00:22:22.680775] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:18:08.050 [2024-07-23 00:22:22.680785] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:18:08.050 [2024-07-23 00:22:22.680814] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:18:08.050 [2024-07-23 00:22:22.680844] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:18:08.050 [2024-07-23 00:22:22.680872] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:18:08.050 [2024-07-23 00:22:22.680884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:08.050 [2024-07-23 00:22:22.680902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:18:08.050 [2024-07-23 00:22:22.680912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:18:08.050 [2024-07-23 00:22:22.680923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:18:08.050 [2024-07-23 00:22:22.680933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:18:08.050 [2024-07-23 00:22:22.680943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:18:08.050 [2024-07-23 00:22:22.680953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:18:08.050 [2024-07-23 00:22:22.680963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:18:08.050 [2024-07-23 00:22:22.680974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:18:08.050 [2024-07-23 00:22:22.680984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:18:08.050 [2024-07-23 00:22:22.680997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:18:08.050 [2024-07-23 00:22:22.681007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:18:08.050 [2024-07-23 00:22:22.681017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:18:08.050 [2024-07-23 00:22:22.681027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:18:08.050 [2024-07-23 00:22:22.681037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:18:08.050 [2024-07-23 00:22:22.681055] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:18:08.050 [2024-07-23 00:22:22.681066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:08.050 [2024-07-23 00:22:22.681076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:18:08.050 [2024-07-23 00:22:22.681087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:18:08.050 [2024-07-23 00:22:22.681106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:18:08.050 [2024-07-23 00:22:22.681117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
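A few arithmetic cross-checks on the layout dump above (my calculations, not log output): the l2p region's 80.00 MiB matches L2P entries x address size = 20971520 x 4 B = 80 MiB exactly; the region lists are contiguous (each blk_offs equals the previous blk_offs + blk_sz); the last base-dev region ends at 0x19003a0 + 0x3fc60 = 0x1940000 blocks = 26476544 x 4 KiB = 103424 MiB, the reported base device capacity; and the last nvc region ends at 0x7220 + 0x13c0e0 = 0x143300 blocks = 1323776 x 4 KiB = 5171 MiB, the reported NV cache capacity. The metadata regions therefore tile both devices with no gaps.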
00:18:08.050 [2024-07-23 00:22:22.681131] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 0.838 ms, status: 0)
00:18:08.050 [2024-07-23 00:22:22.700614] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata (duration: 19.425 ms, status: 0)
00:18:08.050 [2024-07-23 00:22:22.700753] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses (duration: 0.050 ms, status: 0)
00:18:08.050 [2024-07-23 00:22:22.712012] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache (duration: 11.187 ms, status: 0)
00:18:08.050 [2024-07-23 00:22:22.712105] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map (duration: 0.003 ms, status: 0)
00:18:08.051 [2024-07-23 00:22:22.712605] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map (duration: 0.417 ms, status: 0)
00:18:08.051 [2024-07-23 00:22:22.712759] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata (duration: 0.093 ms, status: 0)
00:18:08.051 [2024-07-23 00:22:22.718624] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc (duration: 5.799 ms, status: 0)
00:18:08.051 [2024-07-23 00:22:22.721137] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:18:08.051 [2024-07-23 00:22:22.721171] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:18:08.051 [2024-07-23 00:22:22.721190] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata (duration: 2.413 ms, status: 0)
00:18:08.309 [2024-07-23 00:22:22.733850] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata (duration: 12.610 ms, status: 0)
00:18:08.310 [2024-07-23 00:22:22.735622] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata (duration: 1.653 ms, status: 0)
00:18:08.310 [2024-07-23 00:22:22.737055] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata (duration: 1.346 ms, status: 0)
00:18:08.310 [2024-07-23 00:22:22.737397] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing (duration: 0.229 ms, status: 0)
00:18:08.310 [2024-07-23 00:22:22.757306] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints (duration: 19.875 ms, status: 0)
00:18:08.310 [2024-07-23 00:22:22.763484] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:18:08.310 [2024-07-23 00:22:22.766089] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P (duration: 8.664 ms, status: 0)
00:18:08.310 [2024-07-23 00:22:22.766227] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P (duration: 0.006 ms, status: 0)
00:18:08.310 [2024-07-23 00:22:22.766358] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization (duration: 0.029 ms, status: 0)
00:18:08.310 [2024-07-23 00:22:22.766444] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller (duration: 0.005 ms, status: 0)
00:18:08.310 [2024-07-23 00:22:22.766513] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:18:08.310 [2024-07-23 00:22:22.766525] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup (duration: 0.013 ms, status: 0)
00:18:08.310 [2024-07-23 00:22:22.770133] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state (duration: 3.551 ms, status: 0)
00:18:08.310 [2024-07-23 00:22:22.770253] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization (duration: 0.029 ms, status: 0)
00:18:08.310 [2024-07-23 00:22:22.771386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 104.310 ms, result 0
00:18:46.132 Copying: 26/1024 [MB] (26 MBps)
[... 36 intermediate progress updates, 54 MB through 1009 MB, steady at 26-30 MBps ...]
Copying: 1024/1024 [MB] (average 27 MBps)
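A quick sanity check on the copy rate (my arithmetic): 1024 MB at the reported 27 MBps average is about 38 s, which agrees with the wall clock; startup finished at 00:22:22.77 and the shutdown that follows began at 00:23:00.56, roughly 37.8 s later.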
00:18:46.132 [2024-07-23 00:23:00.557465] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel (duration: 0.006 ms, status: 0)
00:18:46.132 [2024-07-23 00:23:00.557679] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:46.132 [2024-07-23 00:23:00.558799] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device (duration: 1.086 ms, status: 0)
00:18:46.132 [2024-07-23 00:23:00.559871] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller (duration: 0.329 ms, status: 0)
00:18:46.132 [2024-07-23 00:23:00.565885] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P (duration: 5.827 ms, status: 0)
00:18:46.132 [2024-07-23 00:23:00.573711] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims (duration: 7.741 ms, status: 0)
00:18:46.132 [2024-07-23 00:23:00.575719] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata (duration: 1.531 ms, status: 0)
00:18:46.132 [2024-07-23 00:23:00.579678] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata (duration: 3.872 ms, status: 0)
00:18:46.132 [2024-07-23 00:23:00.579843] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata (duration: 0.074 ms, status: 0)
00:18:46.132 [2024-07-23 00:23:00.581743] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: persist band info metadata (duration: 1.840 ms, status: 0)
00:18:46.132 [2024-07-23 00:23:00.583176] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: persist trim metadata (duration: 1.352 ms, status: 0)
00:18:46.132 [2024-07-23 00:23:00.584384] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock (duration: 1.127 ms, status: 0)
00:18:46.132 [2024-07-23 00:23:00.585560] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state (duration: 1.074 ms, status: 0)
00:18:46.132 [2024-07-23 00:23:00.585641] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:46.132 [2024-07-23 00:23:00.585657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-74: 0 / 261120 wr_cnt: 0 state: free (bands 1-74 all identical)
00:18:46.133 [2024-07-23 00:23:00.586468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:46.133 [2024-07-23 00:23:00.586697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:46.134 [2024-07-23 00:23:00.586708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:46.134 [2024-07-23 00:23:00.586718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:46.134 [2024-07-23 00:23:00.586729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:46.134 [2024-07-23 00:23:00.586747] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:46.134 [2024-07-23 00:23:00.586756] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c24566a-36b9-43b7-8fcf-8e163863318c 00:18:46.134 [2024-07-23 00:23:00.586767] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:46.134 [2024-07-23 00:23:00.586777] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:46.134 [2024-07-23 00:23:00.586786] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:46.134 [2024-07-23 00:23:00.586796] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:46.134 [2024-07-23 00:23:00.586813] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:46.134 [2024-07-23 00:23:00.586823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:46.134 [2024-07-23 00:23:00.586840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:46.134 [2024-07-23 00:23:00.586849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:46.134 [2024-07-23 00:23:00.586858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:46.134 [2024-07-23 00:23:00.586867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.134 [2024-07-23 00:23:00.586883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:46.134 [2024-07-23 00:23:00.586894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 00:18:46.134 [2024-07-23 00:23:00.586904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.134 [2024-07-23 00:23:00.588572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.134 [2024-07-23 00:23:00.588593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:46.134 [2024-07-23 00:23:00.588604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.654 ms 00:18:46.134 [2024-07-23 00:23:00.588614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.134 [2024-07-23 00:23:00.588727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.134 [2024-07-23 00:23:00.588740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:46.134 [2024-07-23 00:23:00.588750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:46.134 [2024-07-23 00:23:00.588760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.134 [2024-07-23 00:23:00.594716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.134 [2024-07-23 00:23:00.594737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:46.134 [2024-07-23 00:23:00.594748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.134 [2024-07-23 00:23:00.594764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.134 [2024-07-23 00:23:00.594817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.134 [2024-07-23 00:23:00.594827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:46.134 [2024-07-23 00:23:00.594836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.134 [2024-07-23 00:23:00.594845] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:46.134 [2024-07-23 00:23:00.594905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.134 [2024-07-23 00:23:00.594917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:46.134 [2024-07-23 00:23:00.594927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.134 [2024-07-23 00:23:00.594936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.134 [2024-07-23 00:23:00.594956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.134 [2024-07-23 00:23:00.594966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:46.134 [2024-07-23 00:23:00.594975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.134 [2024-07-23 00:23:00.594985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.134 [2024-07-23 00:23:00.606117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.134 [2024-07-23 00:23:00.606165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:46.134 [2024-07-23 00:23:00.606178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.134 [2024-07-23 00:23:00.606204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.134 [2024-07-23 00:23:00.614464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.134 [2024-07-23 00:23:00.614498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:46.134 [2024-07-23 00:23:00.614510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.134 [2024-07-23 00:23:00.614520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.134 [2024-07-23 00:23:00.614567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.134 [2024-07-23 00:23:00.614578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:46.134 [2024-07-23 00:23:00.614589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.134 [2024-07-23 00:23:00.614598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.134 [2024-07-23 00:23:00.614623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.134 [2024-07-23 00:23:00.614640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:46.134 [2024-07-23 00:23:00.614651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.134 [2024-07-23 00:23:00.614661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.134 [2024-07-23 00:23:00.614732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.134 [2024-07-23 00:23:00.614744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:46.134 [2024-07-23 00:23:00.614754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.134 [2024-07-23 00:23:00.614764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.134 [2024-07-23 00:23:00.614796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.134 [2024-07-23 00:23:00.614808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:46.134 [2024-07-23 00:23:00.614821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:18:46.134 [2024-07-23 00:23:00.614838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.134 [2024-07-23 00:23:00.614874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.134 [2024-07-23 00:23:00.614885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:46.134 [2024-07-23 00:23:00.614895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.134 [2024-07-23 00:23:00.614905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.134 [2024-07-23 00:23:00.614953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.134 [2024-07-23 00:23:00.614974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:46.134 [2024-07-23 00:23:00.614983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.134 [2024-07-23 00:23:00.614993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.134 [2024-07-23 00:23:00.615102] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.726 ms, result 0 00:18:46.393 00:18:46.393 00:18:46.393 00:23:00 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:48.295 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:18:48.295 00:23:02 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:18:48.295 [2024-07-23 00:23:02.628674] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:48.295 [2024-07-23 00:23:02.628816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90451 ] 00:18:48.295 [2024-07-23 00:23:02.778348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.295 [2024-07-23 00:23:02.819808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.295 [2024-07-23 00:23:02.921149] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:48.295 [2024-07-23 00:23:02.921219] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:48.554 [2024-07-23 00:23:03.071800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.554 [2024-07-23 00:23:03.071856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:48.554 [2024-07-23 00:23:03.071872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:48.554 [2024-07-23 00:23:03.071882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.554 [2024-07-23 00:23:03.071941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.554 [2024-07-23 00:23:03.071959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:48.554 [2024-07-23 00:23:03.071976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:48.554 [2024-07-23 00:23:03.071989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.554 [2024-07-23 00:23:03.072016] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write 
buffer cache 00:18:48.554 [2024-07-23 00:23:03.072235] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:48.554 [2024-07-23 00:23:03.072254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.554 [2024-07-23 00:23:03.072284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:48.554 [2024-07-23 00:23:03.072295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:18:48.554 [2024-07-23 00:23:03.072304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.554 [2024-07-23 00:23:03.073707] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:48.554 [2024-07-23 00:23:03.076106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.554 [2024-07-23 00:23:03.076141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:48.554 [2024-07-23 00:23:03.076158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.403 ms 00:18:48.554 [2024-07-23 00:23:03.076168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.554 [2024-07-23 00:23:03.076225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.554 [2024-07-23 00:23:03.076238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:48.554 [2024-07-23 00:23:03.076249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:48.554 [2024-07-23 00:23:03.076259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.554 [2024-07-23 00:23:03.082886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.554 [2024-07-23 00:23:03.082915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:48.554 [2024-07-23 00:23:03.082927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.569 ms 00:18:48.554 [2024-07-23 00:23:03.082946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.554 [2024-07-23 00:23:03.083034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.554 [2024-07-23 00:23:03.083047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:48.554 [2024-07-23 00:23:03.083058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:48.554 [2024-07-23 00:23:03.083068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.554 [2024-07-23 00:23:03.083128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.554 [2024-07-23 00:23:03.083140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:48.554 [2024-07-23 00:23:03.083159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:48.554 [2024-07-23 00:23:03.083168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.554 [2024-07-23 00:23:03.083195] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:48.554 [2024-07-23 00:23:03.084799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.554 [2024-07-23 00:23:03.084833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:48.554 [2024-07-23 00:23:03.084852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.615 ms 00:18:48.554 [2024-07-23 00:23:03.084862] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.554 [2024-07-23 00:23:03.084905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.554 [2024-07-23 00:23:03.084915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:48.554 [2024-07-23 00:23:03.084929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:48.554 [2024-07-23 00:23:03.084938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.554 [2024-07-23 00:23:03.084961] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:48.554 [2024-07-23 00:23:03.084983] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:48.554 [2024-07-23 00:23:03.085023] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:48.554 [2024-07-23 00:23:03.085053] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:48.554 [2024-07-23 00:23:03.085137] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:48.554 [2024-07-23 00:23:03.085153] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:48.554 [2024-07-23 00:23:03.085169] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:48.554 [2024-07-23 00:23:03.085182] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:48.554 [2024-07-23 00:23:03.085201] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:48.554 [2024-07-23 00:23:03.085212] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:48.554 [2024-07-23 00:23:03.085221] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:48.555 [2024-07-23 00:23:03.085231] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:48.555 [2024-07-23 00:23:03.085240] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:48.555 [2024-07-23 00:23:03.085251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.555 [2024-07-23 00:23:03.085277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:48.555 [2024-07-23 00:23:03.085288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:18:48.555 [2024-07-23 00:23:03.085301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.555 [2024-07-23 00:23:03.085376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.555 [2024-07-23 00:23:03.085393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:48.555 [2024-07-23 00:23:03.085403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:48.555 [2024-07-23 00:23:03.085411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.555 [2024-07-23 00:23:03.085497] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:48.555 [2024-07-23 00:23:03.085508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:48.555 [2024-07-23 00:23:03.085519] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:48.555 
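A quick consistency check on the layout numbers above: 20971520 L2P entries at 4 bytes each is exactly 80 MiB, which is the size reported for the l2p region in the NV cache layout dump that follows. A minimal back-of-envelope sketch in plain Python (illustrative arithmetic only, not SPDK code):

```python
# Figures copied verbatim from the ftl_layout_setup dump above.
l2p_entries = 20971520        # 'L2P entries'
l2p_addr_size_bytes = 4       # 'L2P address size'

l2p_mib = l2p_entries * l2p_addr_size_bytes / (1024 * 1024)
print(f"L2P table: {l2p_mib:.2f} MiB")  # 80.00, matching 'Region l2p ... blocks: 80.00 MiB'
```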
[2024-07-23 00:23:03.085535] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.555 [2024-07-23 00:23:03.085548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:48.555 [2024-07-23 00:23:03.085558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:48.555 [2024-07-23 00:23:03.085567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:48.555 [2024-07-23 00:23:03.085576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:48.555 [2024-07-23 00:23:03.085585] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:48.555 [2024-07-23 00:23:03.085594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:48.555 [2024-07-23 00:23:03.085603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:48.555 [2024-07-23 00:23:03.085612] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:48.555 [2024-07-23 00:23:03.085621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:48.555 [2024-07-23 00:23:03.085630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:48.555 [2024-07-23 00:23:03.085639] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:48.555 [2024-07-23 00:23:03.085648] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.555 [2024-07-23 00:23:03.085660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:48.555 [2024-07-23 00:23:03.085670] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:48.555 [2024-07-23 00:23:03.085679] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.555 [2024-07-23 00:23:03.085688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:48.555 [2024-07-23 00:23:03.085697] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:48.555 [2024-07-23 00:23:03.085706] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.555 [2024-07-23 00:23:03.085715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:48.555 [2024-07-23 00:23:03.085724] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:48.555 [2024-07-23 00:23:03.085733] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.555 [2024-07-23 00:23:03.085741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:48.555 [2024-07-23 00:23:03.085750] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:48.555 [2024-07-23 00:23:03.085759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.555 [2024-07-23 00:23:03.085768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:48.555 [2024-07-23 00:23:03.085777] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:48.555 [2024-07-23 00:23:03.085786] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.555 [2024-07-23 00:23:03.085794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:48.555 [2024-07-23 00:23:03.085808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:48.555 [2024-07-23 00:23:03.085818] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:48.555 [2024-07-23 00:23:03.085826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:18:48.555 [2024-07-23 00:23:03.085835] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:48.555 [2024-07-23 00:23:03.085844] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:48.555 [2024-07-23 00:23:03.085853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:48.555 [2024-07-23 00:23:03.085862] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:48.555 [2024-07-23 00:23:03.085870] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.555 [2024-07-23 00:23:03.085879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:48.555 [2024-07-23 00:23:03.085888] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:48.555 [2024-07-23 00:23:03.085897] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.555 [2024-07-23 00:23:03.085905] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:48.555 [2024-07-23 00:23:03.085922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:48.555 [2024-07-23 00:23:03.085931] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:48.555 [2024-07-23 00:23:03.085941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.555 [2024-07-23 00:23:03.085957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:48.555 [2024-07-23 00:23:03.085969] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:48.555 [2024-07-23 00:23:03.085979] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:48.555 [2024-07-23 00:23:03.085988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:48.555 [2024-07-23 00:23:03.085996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:48.555 [2024-07-23 00:23:03.086006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:48.555 [2024-07-23 00:23:03.086016] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:48.555 [2024-07-23 00:23:03.086027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:48.555 [2024-07-23 00:23:03.086038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:48.555 [2024-07-23 00:23:03.086048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:48.555 [2024-07-23 00:23:03.086058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:48.555 [2024-07-23 00:23:03.086068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:48.555 [2024-07-23 00:23:03.086078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:48.555 [2024-07-23 00:23:03.086088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:48.555 [2024-07-23 00:23:03.086098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:48.555 [2024-07-23 00:23:03.086107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:48.555 [2024-07-23 00:23:03.086117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:48.555 [2024-07-23 00:23:03.086129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:48.555 [2024-07-23 00:23:03.086140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:48.555 [2024-07-23 00:23:03.086149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:48.555 [2024-07-23 00:23:03.086159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:48.555 [2024-07-23 00:23:03.086169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:48.555 [2024-07-23 00:23:03.086179] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:48.555 [2024-07-23 00:23:03.086197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:48.555 [2024-07-23 00:23:03.086207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:48.555 [2024-07-23 00:23:03.086217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:48.555 [2024-07-23 00:23:03.086236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:48.555 [2024-07-23 00:23:03.086246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:48.555 [2024-07-23 00:23:03.086257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.555 [2024-07-23 00:23:03.086282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:48.555 [2024-07-23 00:23:03.086292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:18:48.555 [2024-07-23 00:23:03.086305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.555 [2024-07-23 00:23:03.109702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.555 [2024-07-23 00:23:03.109741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:48.555 [2024-07-23 00:23:03.109759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.386 ms 00:18:48.555 [2024-07-23 00:23:03.109772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.555 [2024-07-23 00:23:03.109884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.555 [2024-07-23 00:23:03.109900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:48.555 [2024-07-23 00:23:03.109915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 
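The superblock layout lines above all share the fixed shape `Region type:<hex> ver:<n> blk_offs:<hex> blk_sz:<hex>`, which makes them easy to post-process when comparing layouts across runs. A minimal parsing sketch (hypothetical helper, not part of SPDK; the 4 KiB block size is inferred from the dump itself, since type 0x9's 0x1900000 blocks work out to the 102400.00 MiB data_btm region):

```python
import re

# Matches lines such as: Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
REGION = re.compile(r"Region type:(0x[0-9a-fA-F]+) ver:(\d+) "
                    r"blk_offs:(0x[0-9a-fA-F]+) blk_sz:(0x[0-9a-fA-F]+)")

FTL_BLOCK_BYTES = 4096  # inferred: 0x1900000 blocks * 4 KiB == 102400.00 MiB

def regions(log_text):
    """Yield (type, version, offset_blocks, size_blocks) per layout line."""
    for m in REGION.finditer(log_text):
        t, v, offs, size = m.groups()
        yield int(t, 16), int(v), int(offs, 16), int(size, 16)

sample = "Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000"
for t, v, offs, size in regions(sample):
    print(f"type=0x{t:x} ver={v} size={size * FTL_BLOCK_BYTES / 2**20:.2f} MiB")
```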
00:18:48.555 [2024-07-23 00:23:03.109933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.555 [2024-07-23 00:23:03.120806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.556 [2024-07-23 00:23:03.120843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:48.556 [2024-07-23 00:23:03.120857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.816 ms 00:18:48.556 [2024-07-23 00:23:03.120867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.556 [2024-07-23 00:23:03.120904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.556 [2024-07-23 00:23:03.120915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:48.556 [2024-07-23 00:23:03.120926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:48.556 [2024-07-23 00:23:03.120940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.556 [2024-07-23 00:23:03.121424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.556 [2024-07-23 00:23:03.121444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:48.556 [2024-07-23 00:23:03.121456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:18:48.556 [2024-07-23 00:23:03.121466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.556 [2024-07-23 00:23:03.121581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.556 [2024-07-23 00:23:03.121596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:48.556 [2024-07-23 00:23:03.121607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:48.556 [2024-07-23 00:23:03.121616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.556 [2024-07-23 00:23:03.127453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.556 [2024-07-23 00:23:03.127486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:48.556 [2024-07-23 00:23:03.127499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.821 ms 00:18:48.556 [2024-07-23 00:23:03.127517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.556 [2024-07-23 00:23:03.130207] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:48.556 [2024-07-23 00:23:03.130245] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:48.556 [2024-07-23 00:23:03.130276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.556 [2024-07-23 00:23:03.130288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:48.556 [2024-07-23 00:23:03.130301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms 00:18:48.556 [2024-07-23 00:23:03.130311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.556 [2024-07-23 00:23:03.142985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.556 [2024-07-23 00:23:03.143023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:48.556 [2024-07-23 00:23:03.143037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.652 ms 00:18:48.556 [2024-07-23 00:23:03.143047] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:48.556 [2024-07-23 00:23:03.144729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.556 [2024-07-23 00:23:03.144763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:48.556 [2024-07-23 00:23:03.144775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:18:48.556 [2024-07-23 00:23:03.144784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.556 [2024-07-23 00:23:03.146119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.556 [2024-07-23 00:23:03.146152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:48.556 [2024-07-23 00:23:03.146163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.302 ms 00:18:48.556 [2024-07-23 00:23:03.146172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.556 [2024-07-23 00:23:03.146462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.556 [2024-07-23 00:23:03.146481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:48.556 [2024-07-23 00:23:03.146499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:18:48.556 [2024-07-23 00:23:03.146509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.556 [2024-07-23 00:23:03.166478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.556 [2024-07-23 00:23:03.166566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:48.556 [2024-07-23 00:23:03.166584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.973 ms 00:18:48.556 [2024-07-23 00:23:03.166603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.556 [2024-07-23 00:23:03.172770] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:48.556 [2024-07-23 00:23:03.175303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.556 [2024-07-23 00:23:03.175331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:48.556 [2024-07-23 00:23:03.175360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.637 ms 00:18:48.556 [2024-07-23 00:23:03.175378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.556 [2024-07-23 00:23:03.175437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.556 [2024-07-23 00:23:03.175448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:48.556 [2024-07-23 00:23:03.175459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:48.556 [2024-07-23 00:23:03.175469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.556 [2024-07-23 00:23:03.175545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.556 [2024-07-23 00:23:03.175560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:48.556 [2024-07-23 00:23:03.175574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:48.556 [2024-07-23 00:23:03.175584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.556 [2024-07-23 00:23:03.175620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.556 [2024-07-23 00:23:03.175630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:18:46.556 [2024-07-23 00:23:03.175640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:46.556 [2024-07-23 00:23:03.175650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.556 [2024-07-23 00:23:03.175682] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:46.556 [2024-07-23 00:23:03.175694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.556 [2024-07-23 00:23:03.175703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:46.556 [2024-07-23 00:23:03.175726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:46.556 [2024-07-23 00:23:03.175736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.556 [2024-07-23 00:23:03.179239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.556 [2024-07-23 00:23:03.179291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:46.556 [2024-07-23 00:23:03.179304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.490 ms 00:18:46.556 [2024-07-23 00:23:03.179314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.556 [2024-07-23 00:23:03.179376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.556 [2024-07-23 00:23:03.179387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:46.556 [2024-07-23 00:23:03.179411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:46.556 [2024-07-23 00:23:03.179425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.556 [2024-07-23 00:23:03.180476] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 108.449 ms, result 0 00:19:28.773  Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-23 00:23:43.243146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.773 [2024-07-23 00:23:43.243213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.773 [2024-07-23 00:23:43.243231]
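The wall-clock stamps bracket the spdk_dd transfer: 'FTL startup' finishes at 00:23:03.180 and the first shutdown trace step appears at 00:23:43.243, about 40 s for 1024 MB. A quick check in plain Python (same-day stamps assumed) agrees with the reported average of 25 MBps:

```python
from datetime import datetime

# Wall-clock stamps copied from the surrounding log entries.
start = datetime.fromisoformat("2024-07-23 00:23:03.180476")  # 'FTL startup' finished
end = datetime.fromisoformat("2024-07-23 00:23:43.243146")    # first shutdown trace step

elapsed_s = (end - start).total_seconds()   # ~40.06 s
print(f"{1024 / elapsed_s:.1f} MB/s")       # ~25.6, consistent with 'average 25 MBps'
```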
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:28.773 [2024-07-23 00:23:43.243241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.773 [2024-07-23 00:23:43.245819] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.773 [2024-07-23 00:23:43.248366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.773 [2024-07-23 00:23:43.248403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.773 [2024-07-23 00:23:43.248427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.499 ms 00:19:28.773 [2024-07-23 00:23:43.248446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.773 [2024-07-23 00:23:43.258198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.773 [2024-07-23 00:23:43.258239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.773 [2024-07-23 00:23:43.258253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.159 ms 00:19:28.773 [2024-07-23 00:23:43.258274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.773 [2024-07-23 00:23:43.282424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.773 [2024-07-23 00:23:43.282465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.773 [2024-07-23 00:23:43.282481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.169 ms 00:19:28.774 [2024-07-23 00:23:43.282498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.774 [2024-07-23 00:23:43.287557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.774 [2024-07-23 00:23:43.287589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:28.774 [2024-07-23 00:23:43.287611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.023 ms 00:19:28.774 [2024-07-23 00:23:43.287621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.774 [2024-07-23 00:23:43.289456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.774 [2024-07-23 00:23:43.289491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.774 [2024-07-23 00:23:43.289503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.777 ms 00:19:28.774 [2024-07-23 00:23:43.289512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.774 [2024-07-23 00:23:43.293131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.774 [2024-07-23 00:23:43.293169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.774 [2024-07-23 00:23:43.293181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.597 ms 00:19:28.774 [2024-07-23 00:23:43.293197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.774 [2024-07-23 00:23:43.417034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.774 [2024-07-23 00:23:43.417090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.774 [2024-07-23 00:23:43.417106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 124.003 ms 00:19:28.774 [2024-07-23 00:23:43.417116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.774 [2024-07-23 00:23:43.419679] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.774 [2024-07-23 00:23:43.419717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:28.774 [2024-07-23 00:23:43.419729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.549 ms 00:19:28.774 [2024-07-23 00:23:43.419739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.774 [2024-07-23 00:23:43.421248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.774 [2024-07-23 00:23:43.421293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:28.774 [2024-07-23 00:23:43.421305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:19:28.774 [2024-07-23 00:23:43.421314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.774 [2024-07-23 00:23:43.422590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.774 [2024-07-23 00:23:43.422622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.774 [2024-07-23 00:23:43.422633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:19:28.774 [2024-07-23 00:23:43.422643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.774 [2024-07-23 00:23:43.423720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.774 [2024-07-23 00:23:43.423754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.774 [2024-07-23 00:23:43.423765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:19:28.774 [2024-07-23 00:23:43.423774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.774 [2024-07-23 00:23:43.423800] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.774 [2024-07-23 00:23:43.423816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 115200 / 261120 wr_cnt: 1 state: open 00:19:28.775 [2024-07-23 00:23:43.423840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.423852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.423863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.423873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.423884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.423895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.423906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.423916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.423926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.423937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.423947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 
261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.423958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.423968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.423979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.423990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.424001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.424011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.424021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.424031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.424042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.424052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.424063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.424073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.424084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.424094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.424104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.424115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.775 [2024-07-23 00:23:43.424125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.776 [2024-07-23 00:23:43.424357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.777 [2024-07-23 00:23:43.424367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424493] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 00:23:43.424750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.778 [2024-07-23 
00:23:43.424761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:19:28.778 [2024-07-23 00:23:43.424772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:19:28.778 [2024-07-23 00:23:43.424782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:19:28.778 [2024-07-23 00:23:43.424793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:19:28.778 [2024-07-23 00:23:43.424803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:19:28.778 [2024-07-23 00:23:43.424814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:19:28.778 [2024-07-23 00:23:43.424825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:19:28.778 [2024-07-23 00:23:43.424835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:19:28.778 [2024-07-23 00:23:43.424845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:19:28.778 [2024-07-23 00:23:43.424855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:19:28.778 [2024-07-23 00:23:43.424866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:19:28.778 [2024-07-23 00:23:43.424876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:19:28.778 [2024-07-23 00:23:43.424887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:19:28.778 [2024-07-23 00:23:43.424898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:19:28.778 [2024-07-23 00:23:43.424915] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:28.778 [2024-07-23 00:23:43.424925] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c24566a-36b9-43b7-8fcf-8e163863318c
00:19:28.778 [2024-07-23 00:23:43.424936] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 115200
00:19:28.778 [2024-07-23 00:23:43.424949] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116160
00:19:28.778 [2024-07-23 00:23:43.424959] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 115200
00:19:28.778 [2024-07-23 00:23:43.424969] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083
00:19:28.778 [2024-07-23 00:23:43.424978] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:28.778 [2024-07-23 00:23:43.424989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:28.778 [2024-07-23 00:23:43.424999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:28.778 [2024-07-23 00:23:43.425015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:28.778 [2024-07-23 00:23:43.425024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:28.778 [2024-07-23 00:23:43.425034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:28.778 [2024-07-23 00:23:43.425044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:28.778 [2024-07-23 00:23:43.425055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms
00:19:28.778 [2024-07-23 00:23:43.425071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:28.778 [2024-07-23 00:23:43.426784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:28.778 [2024-07-23 00:23:43.426807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:28.778 [2024-07-23 00:23:43.426819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.699 ms
00:19:28.778 [2024-07-23 00:23:43.426829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:28.778 [2024-07-23 00:23:43.426932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:28.778 [2024-07-23 00:23:43.426943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:28.778 [2024-07-23 00:23:43.426954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms
00:19:28.778 [2024-07-23 00:23:43.426973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:28.778 [2024-07-23 00:23:43.432925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:28.778 [2024-07-23 00:23:43.432949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:28.778 [2024-07-23 00:23:43.432961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:28.778 [2024-07-23 00:23:43.432987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:28.778 [2024-07-23 00:23:43.433041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:28.778 [2024-07-23 00:23:43.433052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:28.778 [2024-07-23 00:23:43.433064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:28.778 [2024-07-23 00:23:43.433078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:28.778 [2024-07-23 00:23:43.433138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:28.778 [2024-07-23 00:23:43.433151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:28.778 [2024-07-23 00:23:43.433161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:28.778 [2024-07-23 00:23:43.433172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:28.778 [2024-07-23 00:23:43.433188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:28.778 [2024-07-23 00:23:43.433199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:28.778 [2024-07-23 00:23:43.433208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:28.778 [2024-07-23 00:23:43.433226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:28.778 [2024-07-23 00:23:43.444941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:28.778 [2024-07-23 00:23:43.444986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:28.778 [2024-07-23 00:23:43.445023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:28.778 [2024-07-23 00:23:43.445033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:28.778 [2024-07-23 00:23:43.453403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:28.778 [2024-07-23 00:23:43.453445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
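The WAF figure in the statistics dump above is simply the ratio of the two write counters printed with it. A minimal sketch of that arithmetic (Python, values copied from the ftl_dev_dump_stats output above; illustrative only, not part of the test):

    # Recompute the write-amplification factor reported by ftl_dev_dump_stats.
    total_writes = 116160  # "total writes" in the dump above
    user_writes = 115200   # "user writes" in the dump above
    print(f"WAF: {total_writes / user_writes:.4f}")  # -> WAF: 1.0083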
00:19:28.778 [2024-07-23 00:23:43.453459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.778 [2024-07-23 00:23:43.453470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.778 [2024-07-23 00:23:43.453530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.778 [2024-07-23 00:23:43.453541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.778 [2024-07-23 00:23:43.453552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.778 [2024-07-23 00:23:43.453562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.778 [2024-07-23 00:23:43.453588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.778 [2024-07-23 00:23:43.453597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.778 [2024-07-23 00:23:43.453607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.778 [2024-07-23 00:23:43.453617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.038 [2024-07-23 00:23:43.453824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.038 [2024-07-23 00:23:43.453838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.038 [2024-07-23 00:23:43.453848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.038 [2024-07-23 00:23:43.453858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.038 [2024-07-23 00:23:43.453895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.038 [2024-07-23 00:23:43.453907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:29.038 [2024-07-23 00:23:43.453917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.038 [2024-07-23 00:23:43.453927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.038 [2024-07-23 00:23:43.453965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.038 [2024-07-23 00:23:43.453980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.038 [2024-07-23 00:23:43.453990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.038 [2024-07-23 00:23:43.453999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.038 [2024-07-23 00:23:43.454041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.038 [2024-07-23 00:23:43.454052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.038 [2024-07-23 00:23:43.454063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.038 [2024-07-23 00:23:43.454072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.038 [2024-07-23 00:23:43.454201] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 213.023 ms, result 0 00:19:29.975 00:19:29.975 00:19:29.975 00:23:44 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:19:29.975 [2024-07-23 00:23:44.400755] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:19:29.975 [2024-07-23 00:23:44.400893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90875 ] 00:19:29.975 [2024-07-23 00:23:44.552512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.975 [2024-07-23 00:23:44.595165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.234 [2024-07-23 00:23:44.696469] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.234 [2024-07-23 00:23:44.696540] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.234 [2024-07-23 00:23:44.847188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.234 [2024-07-23 00:23:44.847245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:30.234 [2024-07-23 00:23:44.847271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:30.234 [2024-07-23 00:23:44.847290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.234 [2024-07-23 00:23:44.847343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.234 [2024-07-23 00:23:44.847356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:30.234 [2024-07-23 00:23:44.847367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:30.234 [2024-07-23 00:23:44.847379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.235 [2024-07-23 00:23:44.847408] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:30.235 [2024-07-23 00:23:44.847611] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:30.235 [2024-07-23 00:23:44.847629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.235 [2024-07-23 00:23:44.847643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:30.235 [2024-07-23 00:23:44.847653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:19:30.235 [2024-07-23 00:23:44.847670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.235 [2024-07-23 00:23:44.849170] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:30.235 [2024-07-23 00:23:44.851649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.235 [2024-07-23 00:23:44.851694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:30.235 [2024-07-23 00:23:44.851717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.484 ms 00:19:30.235 [2024-07-23 00:23:44.851731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.235 [2024-07-23 00:23:44.851792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.235 [2024-07-23 00:23:44.851804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:30.235 [2024-07-23 00:23:44.851815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:30.235 [2024-07-23 00:23:44.851825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.235 [2024-07-23 00:23:44.858520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.235 [2024-07-23 
00:23:44.858551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:30.235 [2024-07-23 00:23:44.858563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.652 ms 00:19:30.235 [2024-07-23 00:23:44.858572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.235 [2024-07-23 00:23:44.858668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.235 [2024-07-23 00:23:44.858684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:30.235 [2024-07-23 00:23:44.858695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:30.235 [2024-07-23 00:23:44.858711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.235 [2024-07-23 00:23:44.858769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.235 [2024-07-23 00:23:44.858781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:30.235 [2024-07-23 00:23:44.858797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:30.235 [2024-07-23 00:23:44.858807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.235 [2024-07-23 00:23:44.858831] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:30.235 [2024-07-23 00:23:44.860443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.235 [2024-07-23 00:23:44.860470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:30.235 [2024-07-23 00:23:44.860481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.621 ms 00:19:30.235 [2024-07-23 00:23:44.860491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.235 [2024-07-23 00:23:44.860524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.235 [2024-07-23 00:23:44.860535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:30.235 [2024-07-23 00:23:44.860551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:30.235 [2024-07-23 00:23:44.860561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.235 [2024-07-23 00:23:44.860583] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:30.235 [2024-07-23 00:23:44.860607] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:30.235 [2024-07-23 00:23:44.860647] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:30.235 [2024-07-23 00:23:44.860671] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:30.235 [2024-07-23 00:23:44.860755] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:30.235 [2024-07-23 00:23:44.860772] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:30.235 [2024-07-23 00:23:44.860791] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:30.235 [2024-07-23 00:23:44.860810] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:30.235 [2024-07-23 00:23:44.860822] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:30.235 [2024-07-23 00:23:44.860839] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:30.235 [2024-07-23 00:23:44.860849] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:30.235 [2024-07-23 00:23:44.860858] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:30.235 [2024-07-23 00:23:44.860868] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:30.235 [2024-07-23 00:23:44.860878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.235 [2024-07-23 00:23:44.860893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:30.235 [2024-07-23 00:23:44.860904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:19:30.235 [2024-07-23 00:23:44.860916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.235 [2024-07-23 00:23:44.860985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.235 [2024-07-23 00:23:44.860996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:30.235 [2024-07-23 00:23:44.861016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:30.235 [2024-07-23 00:23:44.861026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.235 [2024-07-23 00:23:44.861108] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:30.235 [2024-07-23 00:23:44.861120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:30.235 [2024-07-23 00:23:44.861130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.235 [2024-07-23 00:23:44.861140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.235 [2024-07-23 00:23:44.861155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:30.235 [2024-07-23 00:23:44.861164] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:30.235 [2024-07-23 00:23:44.861173] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:30.235 [2024-07-23 00:23:44.861182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:30.235 [2024-07-23 00:23:44.861193] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:30.235 [2024-07-23 00:23:44.861203] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.235 [2024-07-23 00:23:44.861212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:30.235 [2024-07-23 00:23:44.861221] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:30.235 [2024-07-23 00:23:44.861230] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.235 [2024-07-23 00:23:44.861240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:30.235 [2024-07-23 00:23:44.861248] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:30.235 [2024-07-23 00:23:44.861257] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.235 [2024-07-23 00:23:44.861276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:30.235 [2024-07-23 00:23:44.861286] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:30.235 [2024-07-23 00:23:44.861294] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:19:30.235 [2024-07-23 00:23:44.861303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:30.235 [2024-07-23 00:23:44.861313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:30.235 [2024-07-23 00:23:44.861322] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.235 [2024-07-23 00:23:44.861330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:30.235 [2024-07-23 00:23:44.861339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:30.235 [2024-07-23 00:23:44.861351] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.235 [2024-07-23 00:23:44.861359] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:30.235 [2024-07-23 00:23:44.861368] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:30.235 [2024-07-23 00:23:44.861378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.235 [2024-07-23 00:23:44.861387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:30.235 [2024-07-23 00:23:44.861396] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:30.235 [2024-07-23 00:23:44.861404] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.235 [2024-07-23 00:23:44.861413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:30.235 [2024-07-23 00:23:44.861422] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:30.235 [2024-07-23 00:23:44.861431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.235 [2024-07-23 00:23:44.861441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:30.235 [2024-07-23 00:23:44.861449] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:30.235 [2024-07-23 00:23:44.861458] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.235 [2024-07-23 00:23:44.861467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:30.235 [2024-07-23 00:23:44.861476] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:30.235 [2024-07-23 00:23:44.861484] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.235 [2024-07-23 00:23:44.861496] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:30.235 [2024-07-23 00:23:44.861505] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:30.235 [2024-07-23 00:23:44.861513] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.235 [2024-07-23 00:23:44.861522] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:30.235 [2024-07-23 00:23:44.861534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:30.235 [2024-07-23 00:23:44.861544] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.236 [2024-07-23 00:23:44.861560] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.236 [2024-07-23 00:23:44.861570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:30.236 [2024-07-23 00:23:44.861580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:30.236 [2024-07-23 00:23:44.861588] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:30.236 [2024-07-23 00:23:44.861598] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:30.236 [2024-07-23 00:23:44.861607] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:30.236 [2024-07-23 00:23:44.861616] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:30.236 [2024-07-23 00:23:44.861626] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:30.236 [2024-07-23 00:23:44.861637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.236 [2024-07-23 00:23:44.861655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:30.236 [2024-07-23 00:23:44.861668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:30.236 [2024-07-23 00:23:44.861679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:30.236 [2024-07-23 00:23:44.861689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:30.236 [2024-07-23 00:23:44.861699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:30.236 [2024-07-23 00:23:44.861709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:30.236 [2024-07-23 00:23:44.861719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:30.236 [2024-07-23 00:23:44.861729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:30.236 [2024-07-23 00:23:44.861740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:30.236 [2024-07-23 00:23:44.861750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:30.236 [2024-07-23 00:23:44.861760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:30.236 [2024-07-23 00:23:44.861770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:30.236 [2024-07-23 00:23:44.861780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:30.236 [2024-07-23 00:23:44.861790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:30.236 [2024-07-23 00:23:44.861800] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:30.236 [2024-07-23 00:23:44.861811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.236 [2024-07-23 00:23:44.861822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
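Two figures in the layout dump above cross-check each other: the 80.00 MiB l2p region is exactly "L2P entries" times "L2P address size", and the 102400.00 MiB data_btm region corresponds to the 0x1900000-block blk_sz carried by the base-dev superblock entry just below. A small sketch under the same 4 KiB block-size assumption:

    # Cross-check two values from the FTL layout dump above.
    entries, addr_size = 20971520, 4     # "L2P entries" / "L2P address size"
    print(entries * addr_size // 2**20)  # -> 80, matching "Region l2p ... blocks: 80.00 MiB"
    data_mib = 102400                    # "Region data_btm ... blocks: 102400.00 MiB"
    print(data_mib * 2**20 // 4096)      # -> 26214400 blocks == 0x1900000 (blk_sz below)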
00:19:30.236 [2024-07-23 00:23:44.861835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:30.236 [2024-07-23 00:23:44.861853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:30.236 [2024-07-23 00:23:44.861864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:30.236 [2024-07-23 00:23:44.861875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.236 [2024-07-23 00:23:44.861886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:30.236 [2024-07-23 00:23:44.861896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:19:30.236 [2024-07-23 00:23:44.861908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.236 [2024-07-23 00:23:44.883714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.236 [2024-07-23 00:23:44.883757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.236 [2024-07-23 00:23:44.883774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.793 ms 00:19:30.236 [2024-07-23 00:23:44.883787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.236 [2024-07-23 00:23:44.883887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.236 [2024-07-23 00:23:44.883901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:30.236 [2024-07-23 00:23:44.883914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:30.236 [2024-07-23 00:23:44.883926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.236 [2024-07-23 00:23:44.894826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.236 [2024-07-23 00:23:44.894863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.236 [2024-07-23 00:23:44.894899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.845 ms 00:19:30.236 [2024-07-23 00:23:44.894909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.236 [2024-07-23 00:23:44.894947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.236 [2024-07-23 00:23:44.894958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.236 [2024-07-23 00:23:44.894979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:30.236 [2024-07-23 00:23:44.895000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.236 [2024-07-23 00:23:44.895487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.236 [2024-07-23 00:23:44.895509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.236 [2024-07-23 00:23:44.895520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:19:30.236 [2024-07-23 00:23:44.895530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.236 [2024-07-23 00:23:44.895646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.236 [2024-07-23 00:23:44.895665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.236 [2024-07-23 00:23:44.895675] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:30.236 [2024-07-23 00:23:44.895684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.236 [2024-07-23 00:23:44.901677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.236 [2024-07-23 00:23:44.901712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.236 [2024-07-23 00:23:44.901725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.978 ms 00:19:30.236 [2024-07-23 00:23:44.901735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.236 [2024-07-23 00:23:44.904314] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:19:30.236 [2024-07-23 00:23:44.904348] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:30.236 [2024-07-23 00:23:44.904366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.236 [2024-07-23 00:23:44.904377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:30.236 [2024-07-23 00:23:44.904387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.537 ms 00:19:30.236 [2024-07-23 00:23:44.904397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.495 [2024-07-23 00:23:44.916946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.495 [2024-07-23 00:23:44.916984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:30.495 [2024-07-23 00:23:44.917014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.530 ms 00:19:30.495 [2024-07-23 00:23:44.917024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.495 [2024-07-23 00:23:44.918773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.495 [2024-07-23 00:23:44.918804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:30.495 [2024-07-23 00:23:44.918817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms 00:19:30.495 [2024-07-23 00:23:44.918827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.495 [2024-07-23 00:23:44.920243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.495 [2024-07-23 00:23:44.920292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:30.495 [2024-07-23 00:23:44.920305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:19:30.495 [2024-07-23 00:23:44.920319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.495 [2024-07-23 00:23:44.920604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.495 [2024-07-23 00:23:44.920625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:30.495 [2024-07-23 00:23:44.920637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:19:30.495 [2024-07-23 00:23:44.920655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.495 [2024-07-23 00:23:44.940650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.495 [2024-07-23 00:23:44.940730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:30.495 [2024-07-23 00:23:44.940747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.002 ms 00:19:30.495 
[2024-07-23 00:23:44.940767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.495 [2024-07-23 00:23:44.946916] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:30.495 [2024-07-23 00:23:44.949531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.495 [2024-07-23 00:23:44.949559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:30.495 [2024-07-23 00:23:44.949574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.732 ms 00:19:30.495 [2024-07-23 00:23:44.949584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.495 [2024-07-23 00:23:44.949637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.495 [2024-07-23 00:23:44.949649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:30.495 [2024-07-23 00:23:44.949659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:30.495 [2024-07-23 00:23:44.949676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.495 [2024-07-23 00:23:44.951337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.495 [2024-07-23 00:23:44.951380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:30.495 [2024-07-23 00:23:44.951397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:19:30.495 [2024-07-23 00:23:44.951406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.495 [2024-07-23 00:23:44.951436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.495 [2024-07-23 00:23:44.951447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:30.495 [2024-07-23 00:23:44.951456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:30.495 [2024-07-23 00:23:44.951466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.495 [2024-07-23 00:23:44.951501] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:30.495 [2024-07-23 00:23:44.951513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.495 [2024-07-23 00:23:44.951523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:30.495 [2024-07-23 00:23:44.951536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:30.495 [2024-07-23 00:23:44.951555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.495 [2024-07-23 00:23:44.955210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.495 [2024-07-23 00:23:44.955246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:30.495 [2024-07-23 00:23:44.955288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.642 ms 00:19:30.495 [2024-07-23 00:23:44.955300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.495 [2024-07-23 00:23:44.955364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.495 [2024-07-23 00:23:44.955377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:30.495 [2024-07-23 00:23:44.955388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:30.495 [2024-07-23 00:23:44.955402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.496 [2024-07-23 00:23:44.960134] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.166 ms, result 0
00:20:08.054  Copying: 26/1024 [MB] (26 MBps) Copying: 54/1024 [MB] (27 MBps) Copying: 82/1024 [MB] (27 MBps) Copying: 110/1024 [MB] (27 MBps) Copying: 137/1024 [MB] (27 MBps) Copying: 165/1024 [MB] (27 MBps) Copying: 192/1024 [MB] (27 MBps) Copying: 220/1024 [MB] (27 MBps) Copying: 247/1024 [MB] (27 MBps) Copying: 274/1024 [MB] (27 MBps) Copying: 302/1024 [MB] (27 MBps) Copying: 329/1024 [MB] (27 MBps) Copying: 357/1024 [MB] (27 MBps) Copying: 384/1024 [MB] (27 MBps) Copying: 412/1024 [MB] (27 MBps) Copying: 439/1024 [MB] (27 MBps) Copying: 467/1024 [MB] (27 MBps) Copying: 494/1024 [MB] (27 MBps) Copying: 521/1024 [MB] (27 MBps) Copying: 549/1024 [MB] (27 MBps) Copying: 576/1024 [MB] (27 MBps) Copying: 603/1024 [MB] (27 MBps) Copying: 631/1024 [MB] (27 MBps) Copying: 658/1024 [MB] (27 MBps) Copying: 685/1024 [MB] (27 MBps) Copying: 713/1024 [MB] (27 MBps) Copying: 740/1024 [MB] (27 MBps) Copying: 769/1024 [MB] (28 MBps) Copying: 796/1024 [MB] (27 MBps) Copying: 823/1024 [MB] (26 MBps) Copying: 850/1024 [MB] (26 MBps) Copying: 877/1024 [MB] (26 MBps) Copying: 904/1024 [MB] (27 MBps) Copying: 931/1024 [MB] (27 MBps) Copying: 959/1024 [MB] (27 MBps) Copying: 986/1024 [MB] (27 MBps) Copying: 1013/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 27 MBps)
[2024-07-23 00:24:22.710994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:08.054 [2024-07-23 00:24:22.711063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:08.054 [2024-07-23 00:24:22.711082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:20:08.054 [2024-07-23 00:24:22.711096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:08.054 [2024-07-23 00:24:22.711127] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:08.054 [2024-07-23 00:24:22.711846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:08.054 [2024-07-23 00:24:22.711878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:08.054 [2024-07-23 00:24:22.711896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms
00:20:08.054 [2024-07-23 00:24:22.711908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:08.054 [2024-07-23 00:24:22.712136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:08.054 [2024-07-23 00:24:22.712157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:08.054 [2024-07-23 00:24:22.712170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms
00:20:08.054 [2024-07-23 00:24:22.712181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:08.054 [2024-07-23 00:24:22.717643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:08.054 [2024-07-23 00:24:22.717689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:08.054 [2024-07-23 00:24:22.717705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.449 ms
00:20:08.054 [2024-07-23 00:24:22.717725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:08.054 [2024-07-23 00:24:22.723245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:08.054 [2024-07-23 00:24:22.723284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish
L2P trims 00:20:08.054 [2024-07-23 00:24:22.723296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.485 ms 00:20:08.054 [2024-07-23 00:24:22.723305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.054 [2024-07-23 00:24:22.724881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.054 [2024-07-23 00:24:22.724919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:08.054 [2024-07-23 00:24:22.724931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.504 ms 00:20:08.054 [2024-07-23 00:24:22.724941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.054 [2024-07-23 00:24:22.728602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.054 [2024-07-23 00:24:22.728641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:08.054 [2024-07-23 00:24:22.728653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.637 ms 00:20:08.054 [2024-07-23 00:24:22.728677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.315 [2024-07-23 00:24:22.871574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.315 [2024-07-23 00:24:22.871638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:08.315 [2024-07-23 00:24:22.871653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 143.093 ms 00:20:08.315 [2024-07-23 00:24:22.871664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.315 [2024-07-23 00:24:22.873713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.315 [2024-07-23 00:24:22.873746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:08.315 [2024-07-23 00:24:22.873758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.036 ms 00:20:08.315 [2024-07-23 00:24:22.873768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.315 [2024-07-23 00:24:22.875106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.315 [2024-07-23 00:24:22.875141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:08.315 [2024-07-23 00:24:22.875152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.312 ms 00:20:08.315 [2024-07-23 00:24:22.875161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.315 [2024-07-23 00:24:22.876333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.315 [2024-07-23 00:24:22.876368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:08.315 [2024-07-23 00:24:22.876379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:20:08.315 [2024-07-23 00:24:22.876388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.315 [2024-07-23 00:24:22.877383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.315 [2024-07-23 00:24:22.877417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:08.315 [2024-07-23 00:24:22.877429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:20:08.315 [2024-07-23 00:24:22.877438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.315 [2024-07-23 00:24:22.877464] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:08.315 [2024-07-23 
00:24:22.877491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:20:08.315 [2024-07-23 00:24:22.877504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 
00:20:08.315 [2024-07-23 00:24:22.877754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:08.315 [2024-07-23 00:24:22.877805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.877993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 
wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878549] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:08.316 [2024-07-23 00:24:22.878567] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:08.316 [2024-07-23 00:24:22.878577] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c24566a-36b9-43b7-8fcf-8e163863318c 00:20:08.316 [2024-07-23 00:24:22.878588] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:20:08.316 [2024-07-23 00:24:22.878601] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 19392 00:20:08.316 [2024-07-23 00:24:22.878611] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 18432 00:20:08.316 [2024-07-23 00:24:22.878621] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0521 00:20:08.316 [2024-07-23 00:24:22.878631] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:08.316 [2024-07-23 00:24:22.878641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:08.316 [2024-07-23 00:24:22.878651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:08.316 [2024-07-23 00:24:22.878660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:08.316 [2024-07-23 00:24:22.878669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:08.316 [2024-07-23 00:24:22.878678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.316 [2024-07-23 00:24:22.878689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:08.316 [2024-07-23 00:24:22.878705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:20:08.316 [2024-07-23 00:24:22.878715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.316 [2024-07-23 00:24:22.880404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.316 [2024-07-23 00:24:22.880433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:08.316 [2024-07-23 00:24:22.880445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.675 ms 00:20:08.316 [2024-07-23 00:24:22.880455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.316 [2024-07-23 00:24:22.880558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.316 [2024-07-23 00:24:22.880569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:08.316 [2024-07-23 00:24:22.880586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:08.316 [2024-07-23 00:24:22.880602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.316 [2024-07-23 00:24:22.886599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.317 [2024-07-23 00:24:22.886622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:08.317 [2024-07-23 00:24:22.886635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.317 [2024-07-23 00:24:22.886645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.317 [2024-07-23 00:24:22.886687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.317 [2024-07-23 00:24:22.886701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:08.317 [2024-07-23 00:24:22.886711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.317 [2024-07-23 00:24:22.886724] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.317 [2024-07-23 00:24:22.886794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.317 [2024-07-23 00:24:22.886806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:08.317 [2024-07-23 00:24:22.886824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.317 [2024-07-23 00:24:22.886833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.317 [2024-07-23 00:24:22.886849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.317 [2024-07-23 00:24:22.886859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:08.317 [2024-07-23 00:24:22.886876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.317 [2024-07-23 00:24:22.886885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.317 [2024-07-23 00:24:22.898085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.317 [2024-07-23 00:24:22.898139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:08.317 [2024-07-23 00:24:22.898152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.317 [2024-07-23 00:24:22.898171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.317 [2024-07-23 00:24:22.906403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.317 [2024-07-23 00:24:22.906448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:08.317 [2024-07-23 00:24:22.906460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.317 [2024-07-23 00:24:22.906469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.317 [2024-07-23 00:24:22.906536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.317 [2024-07-23 00:24:22.906547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:08.317 [2024-07-23 00:24:22.906557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.317 [2024-07-23 00:24:22.906567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.317 [2024-07-23 00:24:22.906591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.317 [2024-07-23 00:24:22.906602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:08.317 [2024-07-23 00:24:22.906613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.317 [2024-07-23 00:24:22.906622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.317 [2024-07-23 00:24:22.906701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.317 [2024-07-23 00:24:22.906716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:08.317 [2024-07-23 00:24:22.906726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.317 [2024-07-23 00:24:22.906736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.317 [2024-07-23 00:24:22.906768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.317 [2024-07-23 00:24:22.906779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:08.317 [2024-07-23 00:24:22.906790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:20:08.317 [2024-07-23 00:24:22.906800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.317 [2024-07-23 00:24:22.906842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.317 [2024-07-23 00:24:22.906872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:08.317 [2024-07-23 00:24:22.906884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.317 [2024-07-23 00:24:22.906893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.317 [2024-07-23 00:24:22.906933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.317 [2024-07-23 00:24:22.906944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:08.317 [2024-07-23 00:24:22.906954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.317 [2024-07-23 00:24:22.906964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.317 [2024-07-23 00:24:22.907082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 196.381 ms, result 0 00:20:08.576 00:20:08.576 00:20:08.576 00:24:23 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:10.503 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:10.503 00:24:24 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:20:10.503 00:24:24 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:20:10.503 00:24:24 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:10.503 00:24:24 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:10.503 00:24:24 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:10.503 00:24:24 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 89417 00:20:10.503 00:24:24 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 89417 ']' 00:20:10.503 00:24:24 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 89417 00:20:10.503 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (89417) - No such process 00:20:10.503 Process with pid 89417 is not found 00:20:10.503 00:24:24 ftl.ftl_restore -- common/autotest_common.sh@973 -- # echo 'Process with pid 89417 is not found' 00:20:10.503 Remove shared memory files 00:20:10.503 00:24:24 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:20:10.503 00:24:24 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:10.503 00:24:24 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:20:10.503 00:24:24 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:20:10.503 00:24:24 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:20:10.503 00:24:24 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:10.503 00:24:25 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:20:10.503 ************************************ 00:20:10.503 END TEST ftl_restore 00:20:10.503 ************************************ 00:20:10.503 00:20:10.503 real 2m59.857s 00:20:10.503 user 2m49.034s 00:20:10.503 sys 0m12.145s 00:20:10.503 00:24:25 ftl.ftl_restore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:10.503 00:24:25 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:10.503 00:24:25 ftl -- ftl/ftl.sh@77 -- # 
run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:10.503 00:24:25 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:10.503 00:24:25 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:10.503 00:24:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:10.503 ************************************ 00:20:10.503 START TEST ftl_dirty_shutdown 00:20:10.503 ************************************ 00:20:10.503 00:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:10.797 * Looking for test storage... 00:20:10.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:10.797 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:10.797 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:20:10.797 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:10.797 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:10.797 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:10.797 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:10.797 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:10.797 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:10.797 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:10.797 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:10.797 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=91356 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 91356 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@827 -- # '[' -z 91356 ']' 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:10.798 00:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:10.798 [2024-07-23 00:24:25.347555] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:20:10.798 [2024-07-23 00:24:25.347849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91356 ] 00:20:11.056 [2024-07-23 00:24:25.500204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.057 [2024-07-23 00:24:25.543536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.623 00:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:11.623 00:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # return 0 00:20:11.623 00:24:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:11.623 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:20:11.623 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:11.623 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:20:11.623 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:20:11.623 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:11.882 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:11.882 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:20:11.882 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:11.882 00:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:20:11.882 00:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:11.882 00:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:11.882 00:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:11.882 00:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:12.140 00:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:12.140 { 00:20:12.140 "name": "nvme0n1", 00:20:12.140 "aliases": [ 00:20:12.140 "6cfd21e4-981a-4a88-bd02-e66e203ec627" 00:20:12.140 ], 00:20:12.140 "product_name": "NVMe disk", 00:20:12.140 "block_size": 4096, 00:20:12.140 "num_blocks": 1310720, 00:20:12.140 "uuid": "6cfd21e4-981a-4a88-bd02-e66e203ec627", 00:20:12.140 "assigned_rate_limits": { 00:20:12.140 "rw_ios_per_sec": 0, 00:20:12.140 "rw_mbytes_per_sec": 0, 00:20:12.140 "r_mbytes_per_sec": 0, 00:20:12.140 "w_mbytes_per_sec": 0 00:20:12.140 }, 00:20:12.140 "claimed": true, 00:20:12.140 "claim_type": "read_many_write_one", 00:20:12.140 "zoned": false, 00:20:12.140 "supported_io_types": { 00:20:12.140 "read": true, 00:20:12.140 "write": true, 00:20:12.140 "unmap": true, 00:20:12.140 "write_zeroes": true, 00:20:12.140 "flush": true, 00:20:12.140 "reset": true, 00:20:12.140 "compare": true, 00:20:12.140 "compare_and_write": false, 00:20:12.140 "abort": true, 00:20:12.140 "nvme_admin": true, 00:20:12.140 "nvme_io": true 00:20:12.140 }, 00:20:12.140 "driver_specific": { 00:20:12.140 "nvme": [ 00:20:12.140 { 00:20:12.140 "pci_address": "0000:00:11.0", 00:20:12.140 "trid": { 00:20:12.140 "trtype": "PCIe", 00:20:12.140 "traddr": "0000:00:11.0" 00:20:12.140 }, 00:20:12.140 "ctrlr_data": { 00:20:12.140 "cntlid": 0, 00:20:12.140 
"vendor_id": "0x1b36", 00:20:12.140 "model_number": "QEMU NVMe Ctrl", 00:20:12.140 "serial_number": "12341", 00:20:12.140 "firmware_revision": "8.0.0", 00:20:12.140 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:12.140 "oacs": { 00:20:12.140 "security": 0, 00:20:12.140 "format": 1, 00:20:12.140 "firmware": 0, 00:20:12.140 "ns_manage": 1 00:20:12.140 }, 00:20:12.140 "multi_ctrlr": false, 00:20:12.140 "ana_reporting": false 00:20:12.141 }, 00:20:12.141 "vs": { 00:20:12.141 "nvme_version": "1.4" 00:20:12.141 }, 00:20:12.141 "ns_data": { 00:20:12.141 "id": 1, 00:20:12.141 "can_share": false 00:20:12.141 } 00:20:12.141 } 00:20:12.141 ], 00:20:12.141 "mp_policy": "active_passive" 00:20:12.141 } 00:20:12.141 } 00:20:12.141 ]' 00:20:12.141 00:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:12.141 00:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:12.141 00:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:12.141 00:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:20:12.141 00:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:20:12.141 00:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:20:12.141 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:20:12.141 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:12.141 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:20:12.141 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:12.141 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:12.399 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=12bf6442-ea37-4f79-8a0b-acfa0cb650de 00:20:12.399 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:20:12.399 00:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 12bf6442-ea37-4f79-8a0b-acfa0cb650de 00:20:12.399 00:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:12.657 00:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=0f2f8dab-a8ff-4851-87ad-068a9590fb27 00:20:12.657 00:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0f2f8dab-a8ff-4851-87ad-068a9590fb27 00:20:12.915 00:24:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=118b8449-7d65-442e-b2c8-e861ab66175d 00:20:12.915 00:24:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:20:12.915 00:24:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 118b8449-7d65-442e-b2c8-e861ab66175d 00:20:12.915 00:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:20:12.915 00:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:12.915 00:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=118b8449-7d65-442e-b2c8-e861ab66175d 00:20:12.915 00:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:20:12.915 00:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 118b8449-7d65-442e-b2c8-e861ab66175d 00:20:12.915 
00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=118b8449-7d65-442e-b2c8-e861ab66175d 00:20:12.915 00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:12.915 00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:12.915 00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:12.915 00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 118b8449-7d65-442e-b2c8-e861ab66175d 00:20:13.173 00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:13.173 { 00:20:13.173 "name": "118b8449-7d65-442e-b2c8-e861ab66175d", 00:20:13.173 "aliases": [ 00:20:13.173 "lvs/nvme0n1p0" 00:20:13.173 ], 00:20:13.173 "product_name": "Logical Volume", 00:20:13.173 "block_size": 4096, 00:20:13.173 "num_blocks": 26476544, 00:20:13.173 "uuid": "118b8449-7d65-442e-b2c8-e861ab66175d", 00:20:13.173 "assigned_rate_limits": { 00:20:13.173 "rw_ios_per_sec": 0, 00:20:13.173 "rw_mbytes_per_sec": 0, 00:20:13.173 "r_mbytes_per_sec": 0, 00:20:13.173 "w_mbytes_per_sec": 0 00:20:13.173 }, 00:20:13.173 "claimed": false, 00:20:13.173 "zoned": false, 00:20:13.173 "supported_io_types": { 00:20:13.173 "read": true, 00:20:13.173 "write": true, 00:20:13.173 "unmap": true, 00:20:13.173 "write_zeroes": true, 00:20:13.173 "flush": false, 00:20:13.173 "reset": true, 00:20:13.173 "compare": false, 00:20:13.173 "compare_and_write": false, 00:20:13.173 "abort": false, 00:20:13.173 "nvme_admin": false, 00:20:13.173 "nvme_io": false 00:20:13.173 }, 00:20:13.173 "driver_specific": { 00:20:13.173 "lvol": { 00:20:13.173 "lvol_store_uuid": "0f2f8dab-a8ff-4851-87ad-068a9590fb27", 00:20:13.173 "base_bdev": "nvme0n1", 00:20:13.173 "thin_provision": true, 00:20:13.173 "num_allocated_clusters": 0, 00:20:13.173 "snapshot": false, 00:20:13.173 "clone": false, 00:20:13.173 "esnap_clone": false 00:20:13.173 } 00:20:13.173 } 00:20:13.173 } 00:20:13.173 ]' 00:20:13.173 00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:13.173 00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:13.173 00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:13.173 00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:20:13.173 00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:20:13.173 00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:20:13.173 00:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:20:13.173 00:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:20:13.173 00:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:13.431 00:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:13.431 00:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:13.431 00:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 118b8449-7d65-442e-b2c8-e861ab66175d 00:20:13.431 00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=118b8449-7d65-442e-b2c8-e861ab66175d 00:20:13.431 00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:13.431 
00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:13.431 00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:13.431 00:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 118b8449-7d65-442e-b2c8-e861ab66175d 00:20:13.689 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:13.689 { 00:20:13.689 "name": "118b8449-7d65-442e-b2c8-e861ab66175d", 00:20:13.689 "aliases": [ 00:20:13.689 "lvs/nvme0n1p0" 00:20:13.689 ], 00:20:13.689 "product_name": "Logical Volume", 00:20:13.689 "block_size": 4096, 00:20:13.689 "num_blocks": 26476544, 00:20:13.689 "uuid": "118b8449-7d65-442e-b2c8-e861ab66175d", 00:20:13.689 "assigned_rate_limits": { 00:20:13.690 "rw_ios_per_sec": 0, 00:20:13.690 "rw_mbytes_per_sec": 0, 00:20:13.690 "r_mbytes_per_sec": 0, 00:20:13.690 "w_mbytes_per_sec": 0 00:20:13.690 }, 00:20:13.690 "claimed": false, 00:20:13.690 "zoned": false, 00:20:13.690 "supported_io_types": { 00:20:13.690 "read": true, 00:20:13.690 "write": true, 00:20:13.690 "unmap": true, 00:20:13.690 "write_zeroes": true, 00:20:13.690 "flush": false, 00:20:13.690 "reset": true, 00:20:13.690 "compare": false, 00:20:13.690 "compare_and_write": false, 00:20:13.690 "abort": false, 00:20:13.690 "nvme_admin": false, 00:20:13.690 "nvme_io": false 00:20:13.690 }, 00:20:13.690 "driver_specific": { 00:20:13.690 "lvol": { 00:20:13.690 "lvol_store_uuid": "0f2f8dab-a8ff-4851-87ad-068a9590fb27", 00:20:13.690 "base_bdev": "nvme0n1", 00:20:13.690 "thin_provision": true, 00:20:13.690 "num_allocated_clusters": 0, 00:20:13.690 "snapshot": false, 00:20:13.690 "clone": false, 00:20:13.690 "esnap_clone": false 00:20:13.690 } 00:20:13.690 } 00:20:13.690 } 00:20:13.690 ]' 00:20:13.690 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:13.690 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:13.690 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:13.690 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:20:13.690 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:20:13.690 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:20:13.690 00:24:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:20:13.690 00:24:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:13.948 00:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:20:13.948 00:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 118b8449-7d65-442e-b2c8-e861ab66175d 00:20:13.948 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=118b8449-7d65-442e-b2c8-e861ab66175d 00:20:13.948 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:13.948 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:13.948 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:13.948 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 118b8449-7d65-442e-b2c8-e861ab66175d 00:20:14.206 00:24:28 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:14.206 { 00:20:14.206 "name": "118b8449-7d65-442e-b2c8-e861ab66175d", 00:20:14.206 "aliases": [ 00:20:14.206 "lvs/nvme0n1p0" 00:20:14.206 ], 00:20:14.206 "product_name": "Logical Volume", 00:20:14.206 "block_size": 4096, 00:20:14.206 "num_blocks": 26476544, 00:20:14.206 "uuid": "118b8449-7d65-442e-b2c8-e861ab66175d", 00:20:14.206 "assigned_rate_limits": { 00:20:14.206 "rw_ios_per_sec": 0, 00:20:14.206 "rw_mbytes_per_sec": 0, 00:20:14.206 "r_mbytes_per_sec": 0, 00:20:14.206 "w_mbytes_per_sec": 0 00:20:14.206 }, 00:20:14.206 "claimed": false, 00:20:14.206 "zoned": false, 00:20:14.206 "supported_io_types": { 00:20:14.206 "read": true, 00:20:14.206 "write": true, 00:20:14.206 "unmap": true, 00:20:14.206 "write_zeroes": true, 00:20:14.206 "flush": false, 00:20:14.206 "reset": true, 00:20:14.206 "compare": false, 00:20:14.206 "compare_and_write": false, 00:20:14.206 "abort": false, 00:20:14.206 "nvme_admin": false, 00:20:14.206 "nvme_io": false 00:20:14.206 }, 00:20:14.206 "driver_specific": { 00:20:14.206 "lvol": { 00:20:14.206 "lvol_store_uuid": "0f2f8dab-a8ff-4851-87ad-068a9590fb27", 00:20:14.206 "base_bdev": "nvme0n1", 00:20:14.206 "thin_provision": true, 00:20:14.206 "num_allocated_clusters": 0, 00:20:14.206 "snapshot": false, 00:20:14.206 "clone": false, 00:20:14.206 "esnap_clone": false 00:20:14.206 } 00:20:14.206 } 00:20:14.206 } 00:20:14.206 ]' 00:20:14.206 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:14.206 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:14.206 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:14.206 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:20:14.206 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:20:14.206 00:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:20:14.206 00:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:20:14.206 00:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 118b8449-7d65-442e-b2c8-e861ab66175d --l2p_dram_limit 10' 00:20:14.206 00:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:20:14.207 00:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:20:14.207 00:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:14.207 00:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 118b8449-7d65-442e-b2c8-e861ab66175d --l2p_dram_limit 10 -c nvc0n1p0 00:20:14.473 [2024-07-23 00:24:28.901557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.473 [2024-07-23 00:24:28.901609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:14.473 [2024-07-23 00:24:28.901630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:14.473 [2024-07-23 00:24:28.901641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.473 [2024-07-23 00:24:28.901710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.473 [2024-07-23 00:24:28.901728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.473 [2024-07-23 00:24:28.901749] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:14.473 [2024-07-23 00:24:28.901762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.473 [2024-07-23 00:24:28.901789] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:14.473 [2024-07-23 00:24:28.902070] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:14.473 [2024-07-23 00:24:28.902093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.473 [2024-07-23 00:24:28.902106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.473 [2024-07-23 00:24:28.902120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:20:14.473 [2024-07-23 00:24:28.902130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.473 [2024-07-23 00:24:28.902206] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 13606ebe-bc72-4516-98d6-72c7480a66e3 00:20:14.473 [2024-07-23 00:24:28.903600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.473 [2024-07-23 00:24:28.903629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:14.473 [2024-07-23 00:24:28.903641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:14.473 [2024-07-23 00:24:28.903666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.473 [2024-07-23 00:24:28.911102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.473 [2024-07-23 00:24:28.911136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.473 [2024-07-23 00:24:28.911149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.406 ms 00:20:14.473 [2024-07-23 00:24:28.911178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.473 [2024-07-23 00:24:28.911260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.473 [2024-07-23 00:24:28.911292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.473 [2024-07-23 00:24:28.911304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:14.473 [2024-07-23 00:24:28.911317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.473 [2024-07-23 00:24:28.911387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.473 [2024-07-23 00:24:28.911403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:14.473 [2024-07-23 00:24:28.911414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:14.473 [2024-07-23 00:24:28.911426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.473 [2024-07-23 00:24:28.911451] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.473 [2024-07-23 00:24:28.913285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.473 [2024-07-23 00:24:28.913310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.473 [2024-07-23 00:24:28.913324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.841 ms 00:20:14.473 [2024-07-23 00:24:28.913334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.473 [2024-07-23 00:24:28.913372] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.473 [2024-07-23 00:24:28.913383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:14.473 [2024-07-23 00:24:28.913396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:14.473 [2024-07-23 00:24:28.913405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.473 [2024-07-23 00:24:28.913430] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:14.473 [2024-07-23 00:24:28.913575] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:14.473 [2024-07-23 00:24:28.913593] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:14.473 [2024-07-23 00:24:28.913607] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:14.473 [2024-07-23 00:24:28.913623] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:14.473 [2024-07-23 00:24:28.913635] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:14.473 [2024-07-23 00:24:28.913648] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:14.473 [2024-07-23 00:24:28.913658] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:14.473 [2024-07-23 00:24:28.913674] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:14.473 [2024-07-23 00:24:28.913684] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:14.473 [2024-07-23 00:24:28.913697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.473 [2024-07-23 00:24:28.913708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:14.473 [2024-07-23 00:24:28.913728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:20:14.473 [2024-07-23 00:24:28.913738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.473 [2024-07-23 00:24:28.913811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.473 [2024-07-23 00:24:28.913821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:14.473 [2024-07-23 00:24:28.913837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:14.473 [2024-07-23 00:24:28.913847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.473 [2024-07-23 00:24:28.913938] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:14.474 [2024-07-23 00:24:28.913951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:14.474 [2024-07-23 00:24:28.913966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.474 [2024-07-23 00:24:28.913977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.474 [2024-07-23 00:24:28.913989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:14.474 [2024-07-23 00:24:28.913999] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:14.474 [2024-07-23 00:24:28.914012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:14.474 [2024-07-23 00:24:28.914022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:14.474 
[2024-07-23 00:24:28.914034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:14.474 [2024-07-23 00:24:28.914043] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.474 [2024-07-23 00:24:28.914055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:14.474 [2024-07-23 00:24:28.914066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:14.474 [2024-07-23 00:24:28.914077] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.474 [2024-07-23 00:24:28.914087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:14.474 [2024-07-23 00:24:28.914102] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:14.474 [2024-07-23 00:24:28.914113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.474 [2024-07-23 00:24:28.914124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:14.474 [2024-07-23 00:24:28.914134] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:14.474 [2024-07-23 00:24:28.914146] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.474 [2024-07-23 00:24:28.914155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:14.474 [2024-07-23 00:24:28.914168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:14.474 [2024-07-23 00:24:28.914177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.474 [2024-07-23 00:24:28.914188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:14.474 [2024-07-23 00:24:28.914198] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:14.474 [2024-07-23 00:24:28.914210] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.474 [2024-07-23 00:24:28.914219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:14.474 [2024-07-23 00:24:28.914242] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:14.474 [2024-07-23 00:24:28.914250] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.474 [2024-07-23 00:24:28.914261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:14.474 [2024-07-23 00:24:28.914447] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:14.474 [2024-07-23 00:24:28.914503] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.474 [2024-07-23 00:24:28.914536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:14.474 [2024-07-23 00:24:28.914568] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:14.474 [2024-07-23 00:24:28.914596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.474 [2024-07-23 00:24:28.914627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:14.474 [2024-07-23 00:24:28.914656] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:14.474 [2024-07-23 00:24:28.914687] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.474 [2024-07-23 00:24:28.914716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:14.474 [2024-07-23 00:24:28.914797] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:14.474 [2024-07-23 00:24:28.914831] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:20:14.474 [2024-07-23 00:24:28.914863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:14.474 [2024-07-23 00:24:28.914892] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:14.474 [2024-07-23 00:24:28.914922] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.474 [2024-07-23 00:24:28.914951] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:14.474 [2024-07-23 00:24:28.914984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:14.474 [2024-07-23 00:24:28.915014] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.474 [2024-07-23 00:24:28.915057] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.474 [2024-07-23 00:24:28.915127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:14.474 [2024-07-23 00:24:28.915216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:14.474 [2024-07-23 00:24:28.915293] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:14.474 [2024-07-23 00:24:28.915334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:14.474 [2024-07-23 00:24:28.915364] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:14.474 [2024-07-23 00:24:28.915398] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:14.474 [2024-07-23 00:24:28.915434] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:14.474 [2024-07-23 00:24:28.915495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.474 [2024-07-23 00:24:28.915603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:14.474 [2024-07-23 00:24:28.915656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:14.474 [2024-07-23 00:24:28.915702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:14.474 [2024-07-23 00:24:28.915751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:14.474 [2024-07-23 00:24:28.915833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:14.474 [2024-07-23 00:24:28.915887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:14.474 [2024-07-23 00:24:28.915900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:14.474 [2024-07-23 00:24:28.915916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:14.474 [2024-07-23 00:24:28.915927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:14.474 [2024-07-23 00:24:28.915940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 
blk_sz:0x20 00:20:14.474 [2024-07-23 00:24:28.915950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:14.474 [2024-07-23 00:24:28.915963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:14.474 [2024-07-23 00:24:28.915974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:14.474 [2024-07-23 00:24:28.915987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:14.474 [2024-07-23 00:24:28.915998] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:14.474 [2024-07-23 00:24:28.916021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.474 [2024-07-23 00:24:28.916033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:14.474 [2024-07-23 00:24:28.916046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:14.474 [2024-07-23 00:24:28.916057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:14.474 [2024-07-23 00:24:28.916070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:14.474 [2024-07-23 00:24:28.916083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.474 [2024-07-23 00:24:28.916096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:14.474 [2024-07-23 00:24:28.916110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.201 ms 00:20:14.474 [2024-07-23 00:24:28.916125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.474 [2024-07-23 00:24:28.916193] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:20:14.474 [2024-07-23 00:24:28.916210] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:17.757 [2024-07-23 00:24:32.118120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.757 [2024-07-23 00:24:32.118204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:17.757 [2024-07-23 00:24:32.118221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3207.123 ms 00:20:17.757 [2024-07-23 00:24:32.118235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.757 [2024-07-23 00:24:32.129526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.757 [2024-07-23 00:24:32.129578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.757 [2024-07-23 00:24:32.129595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.168 ms 00:20:17.757 [2024-07-23 00:24:32.129608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.757 [2024-07-23 00:24:32.129745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.757 [2024-07-23 00:24:32.129767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:17.757 [2024-07-23 00:24:32.129780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:17.757 [2024-07-23 00:24:32.129792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.757 [2024-07-23 00:24:32.140314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.757 [2024-07-23 00:24:32.140360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.757 [2024-07-23 00:24:32.140375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.496 ms 00:20:17.758 [2024-07-23 00:24:32.140387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.140424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.140437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.758 [2024-07-23 00:24:32.140448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:17.758 [2024-07-23 00:24:32.140461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.140939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.140956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.758 [2024-07-23 00:24:32.140968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:20:17.758 [2024-07-23 00:24:32.140989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.141096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.141118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.758 [2024-07-23 00:24:32.141135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:17.758 [2024-07-23 00:24:32.141150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.148341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.148381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.758 [2024-07-23 
00:24:32.148403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.181 ms 00:20:17.758 [2024-07-23 00:24:32.148416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.156023] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:17.758 [2024-07-23 00:24:32.159252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.159289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:17.758 [2024-07-23 00:24:32.159307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.779 ms 00:20:17.758 [2024-07-23 00:24:32.159317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.237466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.237533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:17.758 [2024-07-23 00:24:32.237552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.235 ms 00:20:17.758 [2024-07-23 00:24:32.237575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.237762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.237775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:17.758 [2024-07-23 00:24:32.237789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:20:17.758 [2024-07-23 00:24:32.237800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.241424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.241461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:17.758 [2024-07-23 00:24:32.241477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.602 ms 00:20:17.758 [2024-07-23 00:24:32.241490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.244458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.244491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:17.758 [2024-07-23 00:24:32.244507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.928 ms 00:20:17.758 [2024-07-23 00:24:32.244517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.244778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.244793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:17.758 [2024-07-23 00:24:32.244806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:20:17.758 [2024-07-23 00:24:32.244816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.281917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.281973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:17.758 [2024-07-23 00:24:32.281991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.128 ms 00:20:17.758 [2024-07-23 00:24:32.282005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.286329] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.286365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:17.758 [2024-07-23 00:24:32.286381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.288 ms 00:20:17.758 [2024-07-23 00:24:32.286392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.289568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.289601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:17.758 [2024-07-23 00:24:32.289616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.139 ms 00:20:17.758 [2024-07-23 00:24:32.289625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.293469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.293501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:17.758 [2024-07-23 00:24:32.293517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.808 ms 00:20:17.758 [2024-07-23 00:24:32.293527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.293582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.293595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:17.758 [2024-07-23 00:24:32.293609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:17.758 [2024-07-23 00:24:32.293619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.293685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.758 [2024-07-23 00:24:32.293696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:17.758 [2024-07-23 00:24:32.293709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:17.758 [2024-07-23 00:24:32.293719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.758 [2024-07-23 00:24:32.294742] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3398.287 ms, result 0 00:20:17.758 { 00:20:17.758 "name": "ftl0", 00:20:17.758 "uuid": "13606ebe-bc72-4516-98d6-72c7480a66e3" 00:20:17.758 } 00:20:17.758 00:24:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:20:17.758 00:24:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:18.017 00:24:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:20:18.017 00:24:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:20:18.017 00:24:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:20:18.275 /dev/nbd0 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@865 -- # local i 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # break 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:20:18.275 1+0 records in 00:20:18.275 1+0 records out 00:20:18.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227071 s, 18.0 MB/s 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # size=4096 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # return 0 00:20:18.275 00:24:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:20:18.275 [2024-07-23 00:24:32.822481] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:18.275 [2024-07-23 00:24:32.822619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91487 ] 00:20:18.533 [2024-07-23 00:24:32.972355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.533 [2024-07-23 00:24:33.016939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.535  Copying: 213/1024 [MB] (213 MBps) Copying: 426/1024 [MB] (213 MBps) Copying: 639/1024 [MB] (213 MBps) Copying: 844/1024 [MB] (205 MBps) Copying: 1024/1024 [MB] (average 210 MBps) 00:20:23.535 00:20:23.535 00:24:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:25.436 00:24:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:20:25.436 [2024-07-23 00:24:39.952279] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
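The xtrace lines above are autotest_common.sh's waitfornbd helper: one loop polls /proc/partitions until the kernel publishes nbd0, then a second loop reads a single 4 KiB block back with direct I/O and checks its size to prove the device actually serves reads. A rough bash reconstruction from the fragments in this trace (the back-off sleep and the failure return are assumptions, and /tmp/nbdtest stands in for the test/ftl/nbdtest scratch file used here):

  waitfornbd() {
      local nbd_name=$1 i size
      # wait for /dev/$nbd_name to appear in the partition table
      for (( i = 1; i <= 20; i++ )); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed back-off between polls
      done
      # prove the device answers reads: one direct-I/O 4 KiB block
      for (( i = 1; i <= 20; i++ )); do
          dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
          size=$(stat -c %s /tmp/nbdtest)
          rm -f /tmp/nbdtest
          [ "$size" != 0 ] && return 0
      done
      return 1
  }

With the device confirmed, the test fills testfile with 1 GiB of urandom data, records its md5sum, and pushes it through /dev/nbd0 onto the FTL bdev (the @75-@77 commands whose output follows).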
00:20:25.436 [2024-07-23 00:24:39.952400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91563 ] 00:20:25.436 [2024-07-23 00:24:40.105621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.694 [2024-07-23 00:24:40.149386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.883  Copying: 1024/1024 [MB] (average 17 MBps) 00:21:23.883 00:21:23.883 00:25:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:21:24.141 00:25:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:24.401 [2024-07-23 00:25:38.875342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.401 [2024-07-23 00:25:38.875403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:24.401 [2024-07-23 00:25:38.875435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:24.401 [2024-07-23 00:25:38.875462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.401 [2024-07-23 00:25:38.875489] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:24.401 [2024-07-23 00:25:38.876174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:21:24.401 [2024-07-23 00:25:38.876187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:24.401 [2024-07-23 00:25:38.876203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:21:24.401 [2024-07-23 00:25:38.876221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.401 [2024-07-23 00:25:38.878368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.401 [2024-07-23 00:25:38.878408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:24.401 [2024-07-23 00:25:38.878427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.120 ms 00:21:24.401 [2024-07-23 00:25:38.878438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.401 [2024-07-23 00:25:38.896156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.401 [2024-07-23 00:25:38.896200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:24.401 [2024-07-23 00:25:38.896221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.717 ms 00:21:24.401 [2024-07-23 00:25:38.896231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.401 [2024-07-23 00:25:38.901254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.401 [2024-07-23 00:25:38.901293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:24.401 [2024-07-23 00:25:38.901309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.972 ms 00:21:24.401 [2024-07-23 00:25:38.901320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.401 [2024-07-23 00:25:38.903200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.401 [2024-07-23 00:25:38.903236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:24.401 [2024-07-23 00:25:38.903255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.787 ms 00:21:24.401 [2024-07-23 00:25:38.903276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.401 [2024-07-23 00:25:38.907999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.401 [2024-07-23 00:25:38.908039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:24.401 [2024-07-23 00:25:38.908055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.688 ms 00:21:24.401 [2024-07-23 00:25:38.908070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.401 [2024-07-23 00:25:38.908219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.401 [2024-07-23 00:25:38.908240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:24.401 [2024-07-23 00:25:38.908282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:24.401 [2024-07-23 00:25:38.908297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.401 [2024-07-23 00:25:38.910450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.401 [2024-07-23 00:25:38.910485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:24.401 [2024-07-23 00:25:38.910501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.123 ms 00:21:24.401 [2024-07-23 00:25:38.910511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.401 [2024-07-23 
00:25:38.912045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.401 [2024-07-23 00:25:38.912080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:24.401 [2024-07-23 00:25:38.912099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:21:24.401 [2024-07-23 00:25:38.912110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.401 [2024-07-23 00:25:38.913279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.401 [2024-07-23 00:25:38.913313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:24.401 [2024-07-23 00:25:38.913328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.102 ms 00:21:24.401 [2024-07-23 00:25:38.913349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.401 [2024-07-23 00:25:38.914618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.401 [2024-07-23 00:25:38.914657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:24.401 [2024-07-23 00:25:38.914672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.205 ms 00:21:24.401 [2024-07-23 00:25:38.914682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.401 [2024-07-23 00:25:38.914723] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:24.401 [2024-07-23 00:25:38.914748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:24.401 [2024-07-23 00:25:38.914767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:24.401 [2024-07-23 00:25:38.914778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 
261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.914994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.915988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916366] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 
00:25:38.916702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:24.402 [2024-07-23 00:25:38.916713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:24.403 [2024-07-23 00:25:38.916727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:24.403 [2024-07-23 00:25:38.916738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:24.403 [2024-07-23 00:25:38.916752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:24.403 [2024-07-23 00:25:38.916764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:24.403 [2024-07-23 00:25:38.916778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:24.403 [2024-07-23 00:25:38.916790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:24.403 [2024-07-23 00:25:38.916804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:24.403 [2024-07-23 00:25:38.916815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:24.403 [2024-07-23 00:25:38.916830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:24.403 [2024-07-23 00:25:38.916850] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:24.403 [2024-07-23 00:25:38.916866] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 13606ebe-bc72-4516-98d6-72c7480a66e3 00:21:24.403 [2024-07-23 00:25:38.916878] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:24.403 [2024-07-23 00:25:38.916891] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:24.403 [2024-07-23 00:25:38.916902] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:24.403 [2024-07-23 00:25:38.916916] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:24.403 [2024-07-23 00:25:38.916926] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:24.403 [2024-07-23 00:25:38.916941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:24.403 [2024-07-23 00:25:38.916951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:24.403 [2024-07-23 00:25:38.916975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:24.403 [2024-07-23 00:25:38.916986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:24.403 [2024-07-23 00:25:38.917000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.403 [2024-07-23 00:25:38.917011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:24.403 [2024-07-23 00:25:38.917027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.284 ms 00:21:24.403 [2024-07-23 00:25:38.917040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.403 [2024-07-23 00:25:38.918868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.403 [2024-07-23 00:25:38.918890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:24.403 [2024-07-23 00:25:38.918916] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.799 ms 00:21:24.403 [2024-07-23 00:25:38.918927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.403 [2024-07-23 00:25:38.919037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.403 [2024-07-23 00:25:38.919057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:24.403 [2024-07-23 00:25:38.919072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:21:24.403 [2024-07-23 00:25:38.919092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.403 [2024-07-23 00:25:38.926217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.403 [2024-07-23 00:25:38.926273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:24.403 [2024-07-23 00:25:38.926290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.403 [2024-07-23 00:25:38.926302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.403 [2024-07-23 00:25:38.926359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.403 [2024-07-23 00:25:38.926374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:24.403 [2024-07-23 00:25:38.926388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.403 [2024-07-23 00:25:38.926399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.403 [2024-07-23 00:25:38.926491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.403 [2024-07-23 00:25:38.926505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:24.403 [2024-07-23 00:25:38.926527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.403 [2024-07-23 00:25:38.926537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.403 [2024-07-23 00:25:38.926568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.403 [2024-07-23 00:25:38.926580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:24.403 [2024-07-23 00:25:38.926600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.403 [2024-07-23 00:25:38.926614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.403 [2024-07-23 00:25:38.939469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.403 [2024-07-23 00:25:38.939521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:24.403 [2024-07-23 00:25:38.939539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.403 [2024-07-23 00:25:38.939566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.403 [2024-07-23 00:25:38.947814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.403 [2024-07-23 00:25:38.947851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:24.403 [2024-07-23 00:25:38.947871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.403 [2024-07-23 00:25:38.947885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.403 [2024-07-23 00:25:38.947964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.403 [2024-07-23 00:25:38.947976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize core IO channel 00:21:24.403 [2024-07-23 00:25:38.947993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.403 [2024-07-23 00:25:38.948012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.403 [2024-07-23 00:25:38.948057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.403 [2024-07-23 00:25:38.948069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:24.403 [2024-07-23 00:25:38.948082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.403 [2024-07-23 00:25:38.948092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.403 [2024-07-23 00:25:38.948181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.403 [2024-07-23 00:25:38.948194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:24.403 [2024-07-23 00:25:38.948208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.403 [2024-07-23 00:25:38.948219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.403 [2024-07-23 00:25:38.948275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.403 [2024-07-23 00:25:38.948288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:24.403 [2024-07-23 00:25:38.948302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.403 [2024-07-23 00:25:38.948313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.403 [2024-07-23 00:25:38.948377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.403 [2024-07-23 00:25:38.948389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:24.403 [2024-07-23 00:25:38.948405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.403 [2024-07-23 00:25:38.948415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.403 [2024-07-23 00:25:38.948472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.403 [2024-07-23 00:25:38.948486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:24.403 [2024-07-23 00:25:38.948500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.403 [2024-07-23 00:25:38.948510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.403 [2024-07-23 00:25:38.948651] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.391 ms, result 0 00:21:24.403 true 00:21:24.403 00:25:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 91356 00:21:24.403 00:25:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid91356 00:21:24.403 00:25:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:21:24.403 [2024-07-23 00:25:39.068144] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
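Here the test reaches the dirty shutdown itself: the spdk_tgt that created ftl0 is killed with SIGKILL, so no bdev_ftl_unload runs and the device is left dirty, and the remaining writes are driven by spdk_dd alone, which rebuilds the bdev stack from the ftl.json snapshot that rpc.py save_subsystem_config produced earlier. Condensed from the @83-@88 trace lines around this point (the shell variables are illustrative stand-ins for the literal pid and paths in the log):

  # kill the target hard: no FTL shutdown path runs, the device stays dirty
  kill -9 "$svcpid"
  rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"
  # second 1 GiB data set, written through ftl0 at a 1 GiB offset (262144 x 4096 B);
  # --json lets spdk_dd stand the bdev stack back up without the target
  "$SPDK_DIR/build/bin/spdk_dd" --if=/dev/urandom --of="$TESTFILE2" --bs=4096 --count=262144
  "$SPDK_DIR/build/bin/spdk_dd" --if="$TESTFILE2" --ob=ftl0 --count=262144 --seek=262144 \
      --json="$SPDK_DIR/test/ftl/config/ftl.json"

The "Killed" message from bash and the blobstore recovery notices just below are the expected fallout: on the next open, FTL sees the dirty state (SHM: clean 0) and restores its metadata from the NV cache instead of loading a clean-shutdown snapshot.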
00:21:24.403 [2024-07-23 00:25:39.068459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92172 ] 00:21:24.662 [2024-07-23 00:25:39.220045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.662 [2024-07-23 00:25:39.266324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.827  Copying: 205/1024 [MB] (205 MBps) Copying: 416/1024 [MB] (210 MBps) Copying: 627/1024 [MB] (210 MBps) Copying: 837/1024 [MB] (210 MBps) Copying: 1024/1024 [MB] (average 209 MBps) 00:21:29.827 00:21:29.827 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 91356 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:21:29.827 00:25:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:30.085 [2024-07-23 00:25:44.534981] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:21:30.085 [2024-07-23 00:25:44.535115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92238 ] 00:21:30.085 [2024-07-23 00:25:44.687231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.085 [2024-07-23 00:25:44.730314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.344 [2024-07-23 00:25:44.832431] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:30.344 [2024-07-23 00:25:44.832508] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:30.344 [2024-07-23 00:25:44.894378] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:21:30.344 [2024-07-23 00:25:44.894704] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:21:30.344 [2024-07-23 00:25:44.894949] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:21:30.604 [2024-07-23 00:25:45.199125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.604 [2024-07-23 00:25:45.199183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:30.604 [2024-07-23 00:25:45.199198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:30.604 [2024-07-23 00:25:45.199209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.604 [2024-07-23 00:25:45.199286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.604 [2024-07-23 00:25:45.199306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:30.604 [2024-07-23 00:25:45.199318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:21:30.604 [2024-07-23 00:25:45.199327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.604 [2024-07-23 00:25:45.199354] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:30.604 [2024-07-23 00:25:45.199562] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:30.604 [2024-07-23 00:25:45.199583] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.604 [2024-07-23 00:25:45.199594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:30.604 [2024-07-23 00:25:45.199613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:21:30.604 [2024-07-23 00:25:45.199630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.604 [2024-07-23 00:25:45.201009] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:30.604 [2024-07-23 00:25:45.203537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.604 [2024-07-23 00:25:45.203572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:30.604 [2024-07-23 00:25:45.203585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.534 ms 00:21:30.604 [2024-07-23 00:25:45.203596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.604 [2024-07-23 00:25:45.203652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.604 [2024-07-23 00:25:45.203665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:30.604 [2024-07-23 00:25:45.203679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:30.604 [2024-07-23 00:25:45.203689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.604 [2024-07-23 00:25:45.210467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.604 [2024-07-23 00:25:45.210587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:30.604 [2024-07-23 00:25:45.210733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.730 ms 00:21:30.604 [2024-07-23 00:25:45.210771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.604 [2024-07-23 00:25:45.210891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.604 [2024-07-23 00:25:45.210927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:30.604 [2024-07-23 00:25:45.211027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:21:30.604 [2024-07-23 00:25:45.211073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.604 [2024-07-23 00:25:45.211160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.604 [2024-07-23 00:25:45.211203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:30.604 [2024-07-23 00:25:45.211233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:30.604 [2024-07-23 00:25:45.211328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.604 [2024-07-23 00:25:45.211396] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:30.604 [2024-07-23 00:25:45.213399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.605 [2024-07-23 00:25:45.213513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:30.605 [2024-07-23 00:25:45.213592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.012 ms 00:21:30.605 [2024-07-23 00:25:45.213633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.605 [2024-07-23 00:25:45.213694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.605 [2024-07-23 00:25:45.213726] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:30.605 [2024-07-23 00:25:45.213756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:30.605 [2024-07-23 00:25:45.213847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.605 [2024-07-23 00:25:45.213899] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:30.605 [2024-07-23 00:25:45.213947] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:30.605 [2024-07-23 00:25:45.214024] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:30.605 [2024-07-23 00:25:45.214163] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:30.605 [2024-07-23 00:25:45.214250] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:30.605 [2024-07-23 00:25:45.214296] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:30.605 [2024-07-23 00:25:45.214310] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:30.605 [2024-07-23 00:25:45.214323] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:30.605 [2024-07-23 00:25:45.214335] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:30.605 [2024-07-23 00:25:45.214347] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:30.605 [2024-07-23 00:25:45.214356] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:30.605 [2024-07-23 00:25:45.214366] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:30.605 [2024-07-23 00:25:45.214386] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:30.605 [2024-07-23 00:25:45.214396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.605 [2024-07-23 00:25:45.214406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:30.605 [2024-07-23 00:25:45.214417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:21:30.605 [2024-07-23 00:25:45.214426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.605 [2024-07-23 00:25:45.214502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.605 [2024-07-23 00:25:45.214513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:30.605 [2024-07-23 00:25:45.214523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:30.605 [2024-07-23 00:25:45.214539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.605 [2024-07-23 00:25:45.214630] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:30.605 [2024-07-23 00:25:45.214644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:30.605 [2024-07-23 00:25:45.214655] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:30.605 [2024-07-23 00:25:45.214665] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.605 [2024-07-23 00:25:45.214675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 
00:21:30.605 [2024-07-23 00:25:45.214684] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:30.605 [2024-07-23 00:25:45.214694] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:30.605 [2024-07-23 00:25:45.214702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:30.605 [2024-07-23 00:25:45.214712] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:30.605 [2024-07-23 00:25:45.214722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:30.605 [2024-07-23 00:25:45.214731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:30.605 [2024-07-23 00:25:45.214748] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:30.605 [2024-07-23 00:25:45.214757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:30.605 [2024-07-23 00:25:45.214766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:30.605 [2024-07-23 00:25:45.214775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:30.605 [2024-07-23 00:25:45.214784] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.605 [2024-07-23 00:25:45.214793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:30.605 [2024-07-23 00:25:45.214805] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:30.605 [2024-07-23 00:25:45.214815] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.605 [2024-07-23 00:25:45.214824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:30.605 [2024-07-23 00:25:45.214833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:30.605 [2024-07-23 00:25:45.214842] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:30.605 [2024-07-23 00:25:45.214851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:30.605 [2024-07-23 00:25:45.214861] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:30.605 [2024-07-23 00:25:45.214870] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:30.605 [2024-07-23 00:25:45.214879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:30.605 [2024-07-23 00:25:45.214888] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:30.605 [2024-07-23 00:25:45.214905] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:30.605 [2024-07-23 00:25:45.214915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:30.605 [2024-07-23 00:25:45.214925] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:30.605 [2024-07-23 00:25:45.214934] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:30.605 [2024-07-23 00:25:45.214943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:30.605 [2024-07-23 00:25:45.214953] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:30.605 [2024-07-23 00:25:45.214962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:30.605 [2024-07-23 00:25:45.214970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:30.605 [2024-07-23 00:25:45.214979] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:30.605 [2024-07-23 00:25:45.214988] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:30.605 [2024-07-23 00:25:45.214997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:30.605 [2024-07-23 00:25:45.215005] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:30.605 [2024-07-23 00:25:45.215014] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.605 [2024-07-23 00:25:45.215022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:30.605 [2024-07-23 00:25:45.215031] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:30.605 [2024-07-23 00:25:45.215040] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.605 [2024-07-23 00:25:45.215051] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:30.605 [2024-07-23 00:25:45.215061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:30.605 [2024-07-23 00:25:45.215070] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:30.605 [2024-07-23 00:25:45.215079] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.605 [2024-07-23 00:25:45.215096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:30.605 [2024-07-23 00:25:45.215105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:30.605 [2024-07-23 00:25:45.215114] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:30.605 [2024-07-23 00:25:45.215124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:30.605 [2024-07-23 00:25:45.215133] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:30.605 [2024-07-23 00:25:45.215142] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:30.605 [2024-07-23 00:25:45.215153] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:30.605 [2024-07-23 00:25:45.215165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:30.605 [2024-07-23 00:25:45.215177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:30.605 [2024-07-23 00:25:45.215187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:30.605 [2024-07-23 00:25:45.215197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:30.605 [2024-07-23 00:25:45.215208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:30.605 [2024-07-23 00:25:45.215221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:30.605 [2024-07-23 00:25:45.215231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:30.605 [2024-07-23 00:25:45.215241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:30.605 [2024-07-23 00:25:45.215251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 
blk_sz:0x40 00:21:30.605 [2024-07-23 00:25:45.215274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:30.605 [2024-07-23 00:25:45.215285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:30.605 [2024-07-23 00:25:45.215294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:30.605 [2024-07-23 00:25:45.215304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:30.605 [2024-07-23 00:25:45.215314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:30.605 [2024-07-23 00:25:45.215324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:30.605 [2024-07-23 00:25:45.215344] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:30.606 [2024-07-23 00:25:45.215366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:30.606 [2024-07-23 00:25:45.215378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:30.606 [2024-07-23 00:25:45.215388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:30.606 [2024-07-23 00:25:45.215398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:30.606 [2024-07-23 00:25:45.215408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:30.606 [2024-07-23 00:25:45.215422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.606 [2024-07-23 00:25:45.215432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:30.606 [2024-07-23 00:25:45.215442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:21:30.606 [2024-07-23 00:25:45.215452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.606 [2024-07-23 00:25:45.238779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.606 [2024-07-23 00:25:45.238952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:30.606 [2024-07-23 00:25:45.239159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.306 ms 00:21:30.606 [2024-07-23 00:25:45.239211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.606 [2024-07-23 00:25:45.239452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.606 [2024-07-23 00:25:45.239569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:30.606 [2024-07-23 00:25:45.239654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:30.606 [2024-07-23 00:25:45.239734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.606 [2024-07-23 00:25:45.250758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:21:30.606 [2024-07-23 00:25:45.250895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:30.606 [2024-07-23 00:25:45.250984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.935 ms 00:21:30.606 [2024-07-23 00:25:45.251026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.606 [2024-07-23 00:25:45.251158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.606 [2024-07-23 00:25:45.251245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:30.606 [2024-07-23 00:25:45.251359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:30.606 [2024-07-23 00:25:45.251420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.606 [2024-07-23 00:25:45.252016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.606 [2024-07-23 00:25:45.252303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:30.606 [2024-07-23 00:25:45.252426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:21:30.606 [2024-07-23 00:25:45.252476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.606 [2024-07-23 00:25:45.252685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.606 [2024-07-23 00:25:45.252741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:30.606 [2024-07-23 00:25:45.252835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:21:30.606 [2024-07-23 00:25:45.252882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.606 [2024-07-23 00:25:45.259142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.606 [2024-07-23 00:25:45.259280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:30.606 [2024-07-23 00:25:45.259357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.190 ms 00:21:30.606 [2024-07-23 00:25:45.259404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.606 [2024-07-23 00:25:45.262140] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:30.606 [2024-07-23 00:25:45.262281] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:30.606 [2024-07-23 00:25:45.262304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.606 [2024-07-23 00:25:45.262319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:30.606 [2024-07-23 00:25:45.262334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.738 ms 00:21:30.606 [2024-07-23 00:25:45.262349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.606 [2024-07-23 00:25:45.275025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.606 [2024-07-23 00:25:45.275065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:30.606 [2024-07-23 00:25:45.275089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.648 ms 00:21:30.606 [2024-07-23 00:25:45.275100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.606 [2024-07-23 00:25:45.277175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.606 [2024-07-23 00:25:45.277209] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:30.606 [2024-07-23 00:25:45.277221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.032 ms 00:21:30.606 [2024-07-23 00:25:45.277232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.606 [2024-07-23 00:25:45.278595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.606 [2024-07-23 00:25:45.278627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:30.606 [2024-07-23 00:25:45.278639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:21:30.606 [2024-07-23 00:25:45.278648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.606 [2024-07-23 00:25:45.278951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.606 [2024-07-23 00:25:45.278966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:30.606 [2024-07-23 00:25:45.278978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:21:30.606 [2024-07-23 00:25:45.278987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.866 [2024-07-23 00:25:45.299756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.866 [2024-07-23 00:25:45.299986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:30.866 [2024-07-23 00:25:45.300171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.781 ms 00:21:30.866 [2024-07-23 00:25:45.300209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.866 [2024-07-23 00:25:45.306604] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:30.866 [2024-07-23 00:25:45.310039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.866 [2024-07-23 00:25:45.310163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:30.866 [2024-07-23 00:25:45.310234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.773 ms 00:21:30.866 [2024-07-23 00:25:45.310298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.866 [2024-07-23 00:25:45.310419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.866 [2024-07-23 00:25:45.310500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:30.866 [2024-07-23 00:25:45.310526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:30.866 [2024-07-23 00:25:45.310540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.866 [2024-07-23 00:25:45.310620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.866 [2024-07-23 00:25:45.310633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:30.866 [2024-07-23 00:25:45.310644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:30.866 [2024-07-23 00:25:45.310653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.866 [2024-07-23 00:25:45.310679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.866 [2024-07-23 00:25:45.310690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:30.866 [2024-07-23 00:25:45.310714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:30.866 [2024-07-23 00:25:45.310736] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.866 [2024-07-23 00:25:45.310765] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:30.866 [2024-07-23 00:25:45.310784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.866 [2024-07-23 00:25:45.310805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:30.866 [2024-07-23 00:25:45.310815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:30.866 [2024-07-23 00:25:45.310825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.866 [2024-07-23 00:25:45.314494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.866 [2024-07-23 00:25:45.314538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:30.866 [2024-07-23 00:25:45.314551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.646 ms 00:21:30.866 [2024-07-23 00:25:45.314561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.866 [2024-07-23 00:25:45.314630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.866 [2024-07-23 00:25:45.314649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:30.866 [2024-07-23 00:25:45.314660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:30.866 [2024-07-23 00:25:45.314670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.866 [2024-07-23 00:25:45.315771] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.408 ms, result 0 00:22:10.851  Copying: 26/1024 [MB] (26 MBps) Copying: 52/1024 [MB] (25 MBps) Copying: 79/1024 [MB] (26 MBps) Copying: 104/1024 [MB] (25 MBps) Copying: 130/1024 [MB] (25 MBps) Copying: 157/1024 [MB] (26 MBps) Copying: 183/1024 [MB] (26 MBps) Copying: 209/1024 [MB] (25 MBps) Copying: 235/1024 [MB] (25 MBps) Copying: 260/1024 [MB] (25 MBps) Copying: 286/1024 [MB] (25 MBps) Copying: 312/1024 [MB] (25 MBps) Copying: 338/1024 [MB] (25 MBps) Copying: 364/1024 [MB] (26 MBps) Copying: 390/1024 [MB] (25 MBps) Copying: 416/1024 [MB] (26 MBps) Copying: 442/1024 [MB] (25 MBps) Copying: 468/1024 [MB] (26 MBps) Copying: 494/1024 [MB] (25 MBps) Copying: 520/1024 [MB] (25 MBps) Copying: 546/1024 [MB] (25 MBps) Copying: 572/1024 [MB] (25 MBps) Copying: 597/1024 [MB] (25 MBps) Copying: 623/1024 [MB] (25 MBps) Copying: 648/1024 [MB] (25 MBps) Copying: 674/1024 [MB] (26 MBps) Copying: 701/1024 [MB] (26 MBps) Copying: 727/1024 [MB] (26 MBps) Copying: 753/1024 [MB] (25 MBps) Copying: 779/1024 [MB] (26 MBps) Copying: 805/1024 [MB] (25 MBps) Copying: 831/1024 [MB] (25 MBps) Copying: 857/1024 [MB] (26 MBps) Copying: 883/1024 [MB] (26 MBps) Copying: 909/1024 [MB] (26 MBps) Copying: 935/1024 [MB] (26 MBps) Copying: 961/1024 [MB] (25 MBps) Copying: 987/1024 [MB] (26 MBps) Copying: 1013/1024 [MB] (26 MBps) Copying: 1023/1024 [MB] (10 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-23 00:26:25.372643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.851 [2024-07-23 00:26:25.372706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:10.851 [2024-07-23 00:26:25.372724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:10.851 [2024-07-23 00:26:25.372736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.852 [2024-07-23 
00:26:25.375537] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:10.852 [2024-07-23 00:26:25.376817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.852 [2024-07-23 00:26:25.376856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:10.852 [2024-07-23 00:26:25.376870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.234 ms 00:22:10.852 [2024-07-23 00:26:25.376890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.852 [2024-07-23 00:26:25.386157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.852 [2024-07-23 00:26:25.386196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:10.852 [2024-07-23 00:26:25.386220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.807 ms 00:22:10.852 [2024-07-23 00:26:25.386231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.852 [2024-07-23 00:26:25.409750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.852 [2024-07-23 00:26:25.409793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:10.852 [2024-07-23 00:26:25.409808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.533 ms 00:22:10.852 [2024-07-23 00:26:25.409819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.852 [2024-07-23 00:26:25.414883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.852 [2024-07-23 00:26:25.414916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:10.852 [2024-07-23 00:26:25.414929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.027 ms 00:22:10.852 [2024-07-23 00:26:25.414940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.852 [2024-07-23 00:26:25.416679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.852 [2024-07-23 00:26:25.416711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:10.852 [2024-07-23 00:26:25.416723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.703 ms 00:22:10.852 [2024-07-23 00:26:25.416732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.852 [2024-07-23 00:26:25.420167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.852 [2024-07-23 00:26:25.420204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:10.852 [2024-07-23 00:26:25.420216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.413 ms 00:22:10.852 [2024-07-23 00:26:25.420226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.852 [2024-07-23 00:26:25.521999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.852 [2024-07-23 00:26:25.522041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:10.852 [2024-07-23 00:26:25.522056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.906 ms 00:22:10.852 [2024-07-23 00:26:25.522066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.852 [2024-07-23 00:26:25.524203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.852 [2024-07-23 00:26:25.524239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:10.852 [2024-07-23 00:26:25.524250] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.123 ms 00:22:10.852 [2024-07-23 00:26:25.524274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.852 [2024-07-23 00:26:25.525689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.852 [2024-07-23 00:26:25.525722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:10.852 [2024-07-23 00:26:25.525732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:22:10.852 [2024-07-23 00:26:25.525742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.852 [2024-07-23 00:26:25.526820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.852 [2024-07-23 00:26:25.526852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:10.852 [2024-07-23 00:26:25.526863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:22:10.852 [2024-07-23 00:26:25.526872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.852 [2024-07-23 00:26:25.527942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.852 [2024-07-23 00:26:25.527975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:10.852 [2024-07-23 00:26:25.527987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:22:10.852 [2024-07-23 00:26:25.527996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.852 [2024-07-23 00:26:25.528019] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:10.852 [2024-07-23 00:26:25.528041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103936 / 261120 wr_cnt: 1 state: open 00:22:10.852 [2024-07-23 00:26:25.528054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528183] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 
[2024-07-23 00:26:25.528465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:10.852 [2024-07-23 00:26:25.528591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 
state: free 00:22:10.853 [2024-07-23 00:26:25.528742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.528994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.529005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 
0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.529015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.529026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.529037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.529047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.529058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.529069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.529080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.529090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.529106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.529116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.529127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.529137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:10.853 [2024-07-23 00:26:25.529155] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:10.853 [2024-07-23 00:26:25.529165] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 13606ebe-bc72-4516-98d6-72c7480a66e3 00:22:10.853 [2024-07-23 00:26:25.529183] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103936 00:22:10.853 [2024-07-23 00:26:25.529199] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104896 00:22:10.853 [2024-07-23 00:26:25.529209] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103936 00:22:10.853 [2024-07-23 00:26:25.529219] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092 00:22:10.853 [2024-07-23 00:26:25.529228] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:10.853 [2024-07-23 00:26:25.529245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:10.853 [2024-07-23 00:26:25.529255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:10.853 [2024-07-23 00:26:25.529274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:10.853 [2024-07-23 00:26:25.529283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:10.853 [2024-07-23 00:26:25.529293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.853 [2024-07-23 00:26:25.529302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:10.853 [2024-07-23 00:26:25.529316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:22:10.853 [2024-07-23 00:26:25.529332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.853 [2024-07-23 00:26:25.531250] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:22:10.853 [2024-07-23 00:26:25.531368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:10.853 [2024-07-23 00:26:25.531452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.901 ms 00:22:10.853 [2024-07-23 00:26:25.531488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.853 [2024-07-23 00:26:25.531644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.853 [2024-07-23 00:26:25.531680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:10.853 [2024-07-23 00:26:25.531752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:22:10.853 [2024-07-23 00:26:25.531786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.112 [2024-07-23 00:26:25.537789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.112 [2024-07-23 00:26:25.537901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:11.112 [2024-07-23 00:26:25.537970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.112 [2024-07-23 00:26:25.538005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.112 [2024-07-23 00:26:25.538088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.112 [2024-07-23 00:26:25.538121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:11.113 [2024-07-23 00:26:25.538150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.113 [2024-07-23 00:26:25.538187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.113 [2024-07-23 00:26:25.538250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.113 [2024-07-23 00:26:25.538333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:11.113 [2024-07-23 00:26:25.538409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.113 [2024-07-23 00:26:25.538438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.113 [2024-07-23 00:26:25.538478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.113 [2024-07-23 00:26:25.538513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:11.113 [2024-07-23 00:26:25.538542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.113 [2024-07-23 00:26:25.538571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.113 [2024-07-23 00:26:25.549997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.113 [2024-07-23 00:26:25.550200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:11.113 [2024-07-23 00:26:25.550393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.113 [2024-07-23 00:26:25.550432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.113 [2024-07-23 00:26:25.558742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.113 [2024-07-23 00:26:25.558910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:11.113 [2024-07-23 00:26:25.559005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.113 [2024-07-23 00:26:25.559052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:11.113 [2024-07-23 00:26:25.559130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.113 [2024-07-23 00:26:25.559163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:11.113 [2024-07-23 00:26:25.559193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.113 [2024-07-23 00:26:25.559280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.113 [2024-07-23 00:26:25.559340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.113 [2024-07-23 00:26:25.559381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:11.113 [2024-07-23 00:26:25.559416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.113 [2024-07-23 00:26:25.559445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.113 [2024-07-23 00:26:25.559546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.113 [2024-07-23 00:26:25.559665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:11.113 [2024-07-23 00:26:25.559696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.113 [2024-07-23 00:26:25.559733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.113 [2024-07-23 00:26:25.559794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.113 [2024-07-23 00:26:25.559835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:11.113 [2024-07-23 00:26:25.559911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.113 [2024-07-23 00:26:25.559952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.113 [2024-07-23 00:26:25.560088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.113 [2024-07-23 00:26:25.560125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:11.113 [2024-07-23 00:26:25.560155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.113 [2024-07-23 00:26:25.560184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.113 [2024-07-23 00:26:25.560247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.113 [2024-07-23 00:26:25.560299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:11.113 [2024-07-23 00:26:25.560333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.113 [2024-07-23 00:26:25.560363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.113 [2024-07-23 00:26:25.560510] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 190.174 ms, result 0 00:22:12.050 00:22:12.050 00:22:12.050 00:26:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:22:13.953 00:26:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:13.953 [2024-07-23 00:26:28.199703] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:13.953 [2024-07-23 00:26:28.199852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92685 ] 00:22:13.953 [2024-07-23 00:26:28.350603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.953 [2024-07-23 00:26:28.394708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.953 [2024-07-23 00:26:28.497460] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.953 [2024-07-23 00:26:28.497526] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:14.213 [2024-07-23 00:26:28.649355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.213 [2024-07-23 00:26:28.649410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:14.213 [2024-07-23 00:26:28.649426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:14.213 [2024-07-23 00:26:28.649436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.213 [2024-07-23 00:26:28.649487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.213 [2024-07-23 00:26:28.649500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:14.213 [2024-07-23 00:26:28.649510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:14.213 [2024-07-23 00:26:28.649523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.213 [2024-07-23 00:26:28.649545] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:14.213 [2024-07-23 00:26:28.649765] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:14.213 [2024-07-23 00:26:28.649784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.213 [2024-07-23 00:26:28.649799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:14.213 [2024-07-23 00:26:28.649817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:22:14.213 [2024-07-23 00:26:28.649827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.213 [2024-07-23 00:26:28.651231] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:14.213 [2024-07-23 00:26:28.653826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.213 [2024-07-23 00:26:28.653869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:14.213 [2024-07-23 00:26:28.653893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.600 ms 00:22:14.213 [2024-07-23 00:26:28.653903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.213 [2024-07-23 00:26:28.653966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.213 [2024-07-23 00:26:28.653977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:14.213 [2024-07-23 00:26:28.653995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:14.214 [2024-07-23 00:26:28.654005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.214 [2024-07-23 00:26:28.660811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.214 [2024-07-23 
00:26:28.660849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:14.214 [2024-07-23 00:26:28.660869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.761 ms 00:22:14.214 [2024-07-23 00:26:28.660879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.214 [2024-07-23 00:26:28.660979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.214 [2024-07-23 00:26:28.660992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:14.214 [2024-07-23 00:26:28.661006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:22:14.214 [2024-07-23 00:26:28.661016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.214 [2024-07-23 00:26:28.661078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.214 [2024-07-23 00:26:28.661090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:14.214 [2024-07-23 00:26:28.661115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:14.214 [2024-07-23 00:26:28.661125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.214 [2024-07-23 00:26:28.661150] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:14.214 [2024-07-23 00:26:28.662804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.214 [2024-07-23 00:26:28.662831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:14.214 [2024-07-23 00:26:28.662842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.662 ms 00:22:14.214 [2024-07-23 00:26:28.662852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.214 [2024-07-23 00:26:28.662893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.214 [2024-07-23 00:26:28.662904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:14.214 [2024-07-23 00:26:28.662920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:14.214 [2024-07-23 00:26:28.662930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.214 [2024-07-23 00:26:28.662953] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:14.214 [2024-07-23 00:26:28.662983] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:14.214 [2024-07-23 00:26:28.663022] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:14.214 [2024-07-23 00:26:28.663052] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:14.214 [2024-07-23 00:26:28.663137] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:14.214 [2024-07-23 00:26:28.663154] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:14.214 [2024-07-23 00:26:28.663166] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:14.214 [2024-07-23 00:26:28.663185] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:14.214 [2024-07-23 00:26:28.663197] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:14.214 [2024-07-23 00:26:28.663208] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:14.214 [2024-07-23 00:26:28.663218] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:14.214 [2024-07-23 00:26:28.663227] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:14.214 [2024-07-23 00:26:28.663237] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:14.214 [2024-07-23 00:26:28.663247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.214 [2024-07-23 00:26:28.663269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:14.214 [2024-07-23 00:26:28.663281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:22:14.214 [2024-07-23 00:26:28.663294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.214 [2024-07-23 00:26:28.663362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.214 [2024-07-23 00:26:28.663372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:14.214 [2024-07-23 00:26:28.663382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:14.214 [2024-07-23 00:26:28.663391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.214 [2024-07-23 00:26:28.663493] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:14.214 [2024-07-23 00:26:28.663507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:14.214 [2024-07-23 00:26:28.663518] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:14.214 [2024-07-23 00:26:28.663528] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.214 [2024-07-23 00:26:28.663542] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:14.214 [2024-07-23 00:26:28.663551] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:14.214 [2024-07-23 00:26:28.663560] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:14.214 [2024-07-23 00:26:28.663569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:14.214 [2024-07-23 00:26:28.663578] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:14.214 [2024-07-23 00:26:28.663593] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:14.214 [2024-07-23 00:26:28.663603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:14.214 [2024-07-23 00:26:28.663612] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:14.214 [2024-07-23 00:26:28.663621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:14.214 [2024-07-23 00:26:28.663631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:14.214 [2024-07-23 00:26:28.663640] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:14.214 [2024-07-23 00:26:28.663649] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.214 [2024-07-23 00:26:28.663658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:14.214 [2024-07-23 00:26:28.663667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:14.214 [2024-07-23 00:26:28.663676] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:22:14.214 [2024-07-23 00:26:28.663685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:14.214 [2024-07-23 00:26:28.663694] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:14.214 [2024-07-23 00:26:28.663703] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:14.214 [2024-07-23 00:26:28.663712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:14.214 [2024-07-23 00:26:28.663721] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:14.214 [2024-07-23 00:26:28.663729] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:14.214 [2024-07-23 00:26:28.663741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:14.214 [2024-07-23 00:26:28.663750] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:14.214 [2024-07-23 00:26:28.663759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:14.214 [2024-07-23 00:26:28.663767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:14.214 [2024-07-23 00:26:28.663776] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:14.214 [2024-07-23 00:26:28.663785] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:14.214 [2024-07-23 00:26:28.663793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:14.214 [2024-07-23 00:26:28.663802] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:14.214 [2024-07-23 00:26:28.663811] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:14.214 [2024-07-23 00:26:28.663819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:14.214 [2024-07-23 00:26:28.663828] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:14.214 [2024-07-23 00:26:28.663836] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:14.214 [2024-07-23 00:26:28.663845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:14.214 [2024-07-23 00:26:28.663854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:14.214 [2024-07-23 00:26:28.663862] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.214 [2024-07-23 00:26:28.663871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:14.214 [2024-07-23 00:26:28.663883] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:14.214 [2024-07-23 00:26:28.663892] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.214 [2024-07-23 00:26:28.663900] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:14.214 [2024-07-23 00:26:28.663910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:14.214 [2024-07-23 00:26:28.663920] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:14.214 [2024-07-23 00:26:28.663936] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.214 [2024-07-23 00:26:28.663946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:14.214 [2024-07-23 00:26:28.663956] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:14.214 [2024-07-23 00:26:28.663964] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:14.214 [2024-07-23 00:26:28.663973] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:14.214 [2024-07-23 00:26:28.663982] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:14.214 [2024-07-23 00:26:28.663991] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:14.214 [2024-07-23 00:26:28.664002] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:14.214 [2024-07-23 00:26:28.664014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:14.214 [2024-07-23 00:26:28.664025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:14.214 [2024-07-23 00:26:28.664035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:14.214 [2024-07-23 00:26:28.664049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:14.214 [2024-07-23 00:26:28.664059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:14.215 [2024-07-23 00:26:28.664069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:14.215 [2024-07-23 00:26:28.664079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:14.215 [2024-07-23 00:26:28.664090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:14.215 [2024-07-23 00:26:28.664100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:14.215 [2024-07-23 00:26:28.664109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:14.215 [2024-07-23 00:26:28.664120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:14.215 [2024-07-23 00:26:28.664130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:14.215 [2024-07-23 00:26:28.664140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:14.215 [2024-07-23 00:26:28.664149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:14.215 [2024-07-23 00:26:28.664160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:14.215 [2024-07-23 00:26:28.664169] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:14.215 [2024-07-23 00:26:28.664180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:14.215 [2024-07-23 00:26:28.664191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:22:14.215 [2024-07-23 00:26:28.664202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:14.215 [2024-07-23 00:26:28.664224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:14.215 [2024-07-23 00:26:28.664235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:14.215 [2024-07-23 00:26:28.664245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.664256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:14.215 [2024-07-23 00:26:28.664278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:22:14.215 [2024-07-23 00:26:28.664292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.684858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.684901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:14.215 [2024-07-23 00:26:28.684919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.545 ms 00:22:14.215 [2024-07-23 00:26:28.684932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.685044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.685059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:14.215 [2024-07-23 00:26:28.685073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:14.215 [2024-07-23 00:26:28.685086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.695743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.695792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:14.215 [2024-07-23 00:26:28.695806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.596 ms 00:22:14.215 [2024-07-23 00:26:28.695816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.695854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.695866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:14.215 [2024-07-23 00:26:28.695876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:14.215 [2024-07-23 00:26:28.695893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.696387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.696410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:14.215 [2024-07-23 00:26:28.696421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:22:14.215 [2024-07-23 00:26:28.696431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.696551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.696568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:14.215 [2024-07-23 00:26:28.696578] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:22:14.215 [2024-07-23 00:26:28.696595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.702626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.702658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:14.215 [2024-07-23 00:26:28.702671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.015 ms 00:22:14.215 [2024-07-23 00:26:28.702690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.705253] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:14.215 [2024-07-23 00:26:28.705294] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:14.215 [2024-07-23 00:26:28.705313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.705324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:14.215 [2024-07-23 00:26:28.705334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.528 ms 00:22:14.215 [2024-07-23 00:26:28.705343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.717990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.718029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:14.215 [2024-07-23 00:26:28.718043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.627 ms 00:22:14.215 [2024-07-23 00:26:28.718053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.719753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.719783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:14.215 [2024-07-23 00:26:28.719795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.659 ms 00:22:14.215 [2024-07-23 00:26:28.719805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.721250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.721297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:14.215 [2024-07-23 00:26:28.721309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:22:14.215 [2024-07-23 00:26:28.721323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.721594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.721609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:14.215 [2024-07-23 00:26:28.721620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:22:14.215 [2024-07-23 00:26:28.721629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.742020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.742084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:14.215 [2024-07-23 00:26:28.742101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.399 ms 00:22:14.215 
[2024-07-23 00:26:28.742111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.748272] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:14.215 [2024-07-23 00:26:28.750862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.750892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:14.215 [2024-07-23 00:26:28.750908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.722 ms 00:22:14.215 [2024-07-23 00:26:28.750918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.750972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.750991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:14.215 [2024-07-23 00:26:28.751003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:14.215 [2024-07-23 00:26:28.751020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.752662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.752701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:14.215 [2024-07-23 00:26:28.752717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.595 ms 00:22:14.215 [2024-07-23 00:26:28.752736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.752769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.752779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:14.215 [2024-07-23 00:26:28.752790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:14.215 [2024-07-23 00:26:28.752800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.752834] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:14.215 [2024-07-23 00:26:28.752846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.752855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:14.215 [2024-07-23 00:26:28.752876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:14.215 [2024-07-23 00:26:28.752900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.756536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.756571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:14.215 [2024-07-23 00:26:28.756584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:22:14.215 [2024-07-23 00:26:28.756594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.215 [2024-07-23 00:26:28.756659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.215 [2024-07-23 00:26:28.756671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:14.216 [2024-07-23 00:26:28.756682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:14.216 [2024-07-23 00:26:28.756698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.216 [2024-07-23 00:26:28.761859] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.397 ms, result 0 00:22:45.774  Copying: 1024/1024 [MB] (average 32 MBps)[2024-07-23 00:27:00.313217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.774 [2024-07-23 00:27:00.313302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:45.774 [2024-07-23 00:27:00.313323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:45.774 [2024-07-23 00:27:00.313337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.774 [2024-07-23 00:27:00.313365] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:45.774 [2024-07-23 00:27:00.314167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.774 [2024-07-23 00:27:00.314183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:45.774 [2024-07-23 00:27:00.314198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:22:45.774 [2024-07-23 00:27:00.314210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.774 [2024-07-23 00:27:00.314746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.774 [2024-07-23 00:27:00.314882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:45.775 [2024-07-23 00:27:00.314982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:22:45.775 [2024-07-23 00:27:00.315090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.775 [2024-07-23 00:27:00.327229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.775 [2024-07-23 00:27:00.327387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:45.775 [2024-07-23 00:27:00.327490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.096 ms 00:22:45.775 [2024-07-23 00:27:00.327529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.775 [2024-07-23 00:27:00.332653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.775 [2024-07-23 00:27:00.332794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:45.775 [2024-07-23 00:27:00.332908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.024 ms 00:22:45.775 [2024-07-23 00:27:00.332992] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.775 [2024-07-23 00:27:00.334430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.775 [2024-07-23 00:27:00.334463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:45.775 [2024-07-23 00:27:00.334475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.367 ms 00:22:45.775 [2024-07-23 00:27:00.334485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.775 [2024-07-23 00:27:00.338128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.775 [2024-07-23 00:27:00.338170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:45.775 [2024-07-23 00:27:00.338183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.621 ms 00:22:45.775 [2024-07-23 00:27:00.338193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.775 [2024-07-23 00:27:00.342427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.775 [2024-07-23 00:27:00.342472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:45.775 [2024-07-23 00:27:00.342484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.210 ms 00:22:45.775 [2024-07-23 00:27:00.342494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.775 [2024-07-23 00:27:00.344375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.775 [2024-07-23 00:27:00.344406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:45.775 [2024-07-23 00:27:00.344417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.867 ms 00:22:45.775 [2024-07-23 00:27:00.344426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.775 [2024-07-23 00:27:00.345916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.775 [2024-07-23 00:27:00.345950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:45.775 [2024-07-23 00:27:00.345961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.465 ms 00:22:45.775 [2024-07-23 00:27:00.345970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.775 [2024-07-23 00:27:00.347138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.775 [2024-07-23 00:27:00.347172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:45.775 [2024-07-23 00:27:00.347183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.140 ms 00:22:45.775 [2024-07-23 00:27:00.347192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.775 [2024-07-23 00:27:00.348319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.775 [2024-07-23 00:27:00.348351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:45.775 [2024-07-23 00:27:00.348362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:22:45.775 [2024-07-23 00:27:00.348372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.775 [2024-07-23 00:27:00.348397] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:45.775 [2024-07-23 00:27:00.348413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:45.775 [2024-07-23 00:27:00.348426] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:22:45.775 [2024-07-23 00:27:00.348438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 
00:27:00.348687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:22:45.775 [2024-07-23 00:27:00.348949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.348989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.349000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:45.775 [2024-07-23 00:27:00.349023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:45.776 [2024-07-23 00:27:00.349515] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:22:45.776 [2024-07-23 00:27:00.349528] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 13606ebe-bc72-4516-98d6-72c7480a66e3 00:22:45.776 [2024-07-23 00:27:00.349539] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:22:45.776 [2024-07-23 00:27:00.349548] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 163008 00:22:45.776 [2024-07-23 00:27:00.349565] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 161024 00:22:45.776 [2024-07-23 00:27:00.349576] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0123 00:22:45.776 [2024-07-23 00:27:00.349585] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:45.776 [2024-07-23 00:27:00.349596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:45.776 [2024-07-23 00:27:00.349605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:45.776 [2024-07-23 00:27:00.349614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:45.776 [2024-07-23 00:27:00.349623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:45.776 [2024-07-23 00:27:00.349632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.776 [2024-07-23 00:27:00.349642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:45.776 [2024-07-23 00:27:00.349652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:22:45.776 [2024-07-23 00:27:00.349661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.776 [2024-07-23 00:27:00.351335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.776 [2024-07-23 00:27:00.351355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:45.776 [2024-07-23 00:27:00.351366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:22:45.776 [2024-07-23 00:27:00.351376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.776 [2024-07-23 00:27:00.351479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.776 [2024-07-23 00:27:00.351497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:45.776 [2024-07-23 00:27:00.351508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:45.776 [2024-07-23 00:27:00.351517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.776 [2024-07-23 00:27:00.357449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.776 [2024-07-23 00:27:00.357471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:45.776 [2024-07-23 00:27:00.357483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.776 [2024-07-23 00:27:00.357493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.776 [2024-07-23 00:27:00.357538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.776 [2024-07-23 00:27:00.357553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:45.776 [2024-07-23 00:27:00.357563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.776 [2024-07-23 00:27:00.357572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.776 [2024-07-23 00:27:00.357630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:22:45.776 [2024-07-23 00:27:00.357650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:45.776 [2024-07-23 00:27:00.357660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.776 [2024-07-23 00:27:00.357670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.776 [2024-07-23 00:27:00.357687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.776 [2024-07-23 00:27:00.357697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:45.776 [2024-07-23 00:27:00.357710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.776 [2024-07-23 00:27:00.357720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.776 [2024-07-23 00:27:00.369724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.776 [2024-07-23 00:27:00.369769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:45.776 [2024-07-23 00:27:00.369782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.776 [2024-07-23 00:27:00.369792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.776 [2024-07-23 00:27:00.377863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.776 [2024-07-23 00:27:00.377895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:45.776 [2024-07-23 00:27:00.377913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.776 [2024-07-23 00:27:00.377923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.776 [2024-07-23 00:27:00.377972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.776 [2024-07-23 00:27:00.377984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:45.776 [2024-07-23 00:27:00.377994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.776 [2024-07-23 00:27:00.378003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.776 [2024-07-23 00:27:00.378029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.776 [2024-07-23 00:27:00.378039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:45.776 [2024-07-23 00:27:00.378049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.777 [2024-07-23 00:27:00.378058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.777 [2024-07-23 00:27:00.378139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.777 [2024-07-23 00:27:00.378152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:45.777 [2024-07-23 00:27:00.378163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.777 [2024-07-23 00:27:00.378179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.777 [2024-07-23 00:27:00.378213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.777 [2024-07-23 00:27:00.378225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:45.777 [2024-07-23 00:27:00.378235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.777 [2024-07-23 00:27:00.378244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.777 [2024-07-23 
00:27:00.378297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.777 [2024-07-23 00:27:00.378309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:45.777 [2024-07-23 00:27:00.378319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.777 [2024-07-23 00:27:00.378328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.777 [2024-07-23 00:27:00.378377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.777 [2024-07-23 00:27:00.378388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:45.777 [2024-07-23 00:27:00.378405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.777 [2024-07-23 00:27:00.378421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.777 [2024-07-23 00:27:00.378547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.406 ms, result 0 00:22:46.035 00:22:46.035 00:22:46.035 00:27:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:47.939 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:47.939 00:27:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:47.939 [2024-07-23 00:27:02.410236] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:47.939 [2024-07-23 00:27:02.410386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93032 ] 00:22:47.939 [2024-07-23 00:27:02.561300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.939 [2024-07-23 00:27:02.605089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.198 [2024-07-23 00:27:02.706656] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:48.198 [2024-07-23 00:27:02.706725] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:48.198 [2024-07-23 00:27:02.858729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.198 [2024-07-23 00:27:02.858786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:48.198 [2024-07-23 00:27:02.858811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:48.198 [2024-07-23 00:27:02.858821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.198 [2024-07-23 00:27:02.858869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.198 [2024-07-23 00:27:02.858881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:48.199 [2024-07-23 00:27:02.858891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:48.199 [2024-07-23 00:27:02.858904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.199 [2024-07-23 00:27:02.858932] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:48.199 [2024-07-23 00:27:02.859142] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:48.199 [2024-07-23 00:27:02.859160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.199 [2024-07-23 00:27:02.859176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:48.199 [2024-07-23 00:27:02.859187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:22:48.199 [2024-07-23 00:27:02.859197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.199 [2024-07-23 00:27:02.860580] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:48.199 [2024-07-23 00:27:02.863038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.199 [2024-07-23 00:27:02.863074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:48.199 [2024-07-23 00:27:02.863091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.463 ms 00:22:48.199 [2024-07-23 00:27:02.863102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.199 [2024-07-23 00:27:02.863157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.199 [2024-07-23 00:27:02.863169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:48.199 [2024-07-23 00:27:02.863183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:48.199 [2024-07-23 00:27:02.863200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.199 [2024-07-23 00:27:02.869813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.199 [2024-07-23 00:27:02.869845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:48.199 [2024-07-23 00:27:02.869857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.569 ms 00:22:48.199 [2024-07-23 00:27:02.869868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.199 [2024-07-23 00:27:02.869959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.199 [2024-07-23 00:27:02.869972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:48.199 [2024-07-23 00:27:02.869983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:48.199 [2024-07-23 00:27:02.869993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.199 [2024-07-23 00:27:02.870063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.199 [2024-07-23 00:27:02.870075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:48.199 [2024-07-23 00:27:02.870092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:48.199 [2024-07-23 00:27:02.870108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.199 [2024-07-23 00:27:02.870133] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:48.199 [2024-07-23 00:27:02.871755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.199 [2024-07-23 00:27:02.871782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:48.199 [2024-07-23 00:27:02.871793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:22:48.199 [2024-07-23 00:27:02.871804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.199 [2024-07-23 00:27:02.871836] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.199 [2024-07-23 00:27:02.871846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:48.199 [2024-07-23 00:27:02.871868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:48.199 [2024-07-23 00:27:02.871884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.199 [2024-07-23 00:27:02.871906] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:48.199 [2024-07-23 00:27:02.871931] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:48.199 [2024-07-23 00:27:02.871972] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:48.199 [2024-07-23 00:27:02.871991] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:48.199 [2024-07-23 00:27:02.872085] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:48.199 [2024-07-23 00:27:02.872108] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:48.199 [2024-07-23 00:27:02.872121] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:48.199 [2024-07-23 00:27:02.872134] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:48.199 [2024-07-23 00:27:02.872145] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:48.199 [2024-07-23 00:27:02.872157] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:48.199 [2024-07-23 00:27:02.872167] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:48.199 [2024-07-23 00:27:02.872177] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:48.199 [2024-07-23 00:27:02.872186] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:48.199 [2024-07-23 00:27:02.872204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.199 [2024-07-23 00:27:02.872213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:48.199 [2024-07-23 00:27:02.872224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:22:48.199 [2024-07-23 00:27:02.872237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.199 [2024-07-23 00:27:02.872323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.199 [2024-07-23 00:27:02.872335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:48.199 [2024-07-23 00:27:02.872346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:48.199 [2024-07-23 00:27:02.872363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.199 [2024-07-23 00:27:02.872451] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:48.199 [2024-07-23 00:27:02.872471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:48.199 [2024-07-23 00:27:02.872481] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:48.199 [2024-07-23 00:27:02.872491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:22:48.199 [2024-07-23 00:27:02.872504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:48.199 [2024-07-23 00:27:02.872513] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:48.199 [2024-07-23 00:27:02.872523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:48.199 [2024-07-23 00:27:02.872532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:48.199 [2024-07-23 00:27:02.872542] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:48.199 [2024-07-23 00:27:02.872552] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:48.199 [2024-07-23 00:27:02.872562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:48.199 [2024-07-23 00:27:02.872571] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:48.199 [2024-07-23 00:27:02.872584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:48.199 [2024-07-23 00:27:02.872593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:48.199 [2024-07-23 00:27:02.872602] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:48.199 [2024-07-23 00:27:02.872612] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:48.199 [2024-07-23 00:27:02.872621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:48.199 [2024-07-23 00:27:02.872630] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:48.199 [2024-07-23 00:27:02.872639] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:48.199 [2024-07-23 00:27:02.872648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:48.199 [2024-07-23 00:27:02.872658] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:48.199 [2024-07-23 00:27:02.872667] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:48.199 [2024-07-23 00:27:02.872676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:48.199 [2024-07-23 00:27:02.872685] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:48.199 [2024-07-23 00:27:02.872694] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:48.199 [2024-07-23 00:27:02.872704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:48.199 [2024-07-23 00:27:02.872713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:48.199 [2024-07-23 00:27:02.872722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:48.199 [2024-07-23 00:27:02.872737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:48.199 [2024-07-23 00:27:02.872746] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:48.199 [2024-07-23 00:27:02.872755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:48.199 [2024-07-23 00:27:02.872764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:48.199 [2024-07-23 00:27:02.872773] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:48.199 [2024-07-23 00:27:02.872782] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:48.199 [2024-07-23 00:27:02.872791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:48.199 [2024-07-23 00:27:02.872800] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:48.199 [2024-07-23 00:27:02.872809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:48.199 [2024-07-23 00:27:02.872818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:48.199 [2024-07-23 00:27:02.872827] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:48.199 [2024-07-23 00:27:02.872836] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:48.199 [2024-07-23 00:27:02.872845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:48.199 [2024-07-23 00:27:02.872854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:48.199 [2024-07-23 00:27:02.872863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:48.199 [2024-07-23 00:27:02.872871] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:48.199 [2024-07-23 00:27:02.872885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:48.200 [2024-07-23 00:27:02.872895] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:48.200 [2024-07-23 00:27:02.872904] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:48.200 [2024-07-23 00:27:02.872914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:48.200 [2024-07-23 00:27:02.872923] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:48.200 [2024-07-23 00:27:02.872932] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:48.200 [2024-07-23 00:27:02.872942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:48.200 [2024-07-23 00:27:02.872950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:48.200 [2024-07-23 00:27:02.872960] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:48.200 [2024-07-23 00:27:02.872978] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:48.200 [2024-07-23 00:27:02.872989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:48.200 [2024-07-23 00:27:02.873001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:48.200 [2024-07-23 00:27:02.873011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:48.200 [2024-07-23 00:27:02.873022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:48.200 [2024-07-23 00:27:02.873032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:48.200 [2024-07-23 00:27:02.873042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:48.200 [2024-07-23 00:27:02.873055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:48.200 [2024-07-23 00:27:02.873065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:48.200 [2024-07-23 00:27:02.873076] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:48.200 [2024-07-23 00:27:02.873086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:48.200 [2024-07-23 00:27:02.873096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:48.200 [2024-07-23 00:27:02.873106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:48.200 [2024-07-23 00:27:02.873116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:48.200 [2024-07-23 00:27:02.873126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:48.200 [2024-07-23 00:27:02.873137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:48.200 [2024-07-23 00:27:02.873146] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:48.200 [2024-07-23 00:27:02.873164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:48.200 [2024-07-23 00:27:02.873182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:48.200 [2024-07-23 00:27:02.873192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:48.200 [2024-07-23 00:27:02.873211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:48.200 [2024-07-23 00:27:02.873222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:48.200 [2024-07-23 00:27:02.873233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.200 [2024-07-23 00:27:02.873246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:48.200 [2024-07-23 00:27:02.873257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:22:48.200 [2024-07-23 00:27:02.873289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.895816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.895863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:48.460 [2024-07-23 00:27:02.895882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.510 ms 00:22:48.460 [2024-07-23 00:27:02.895902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.896006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.896020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:48.460 [2024-07-23 00:27:02.896034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:48.460 [2024-07-23 00:27:02.896047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.906640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.906679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:48.460 [2024-07-23 00:27:02.906695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.528 ms 00:22:48.460 [2024-07-23 00:27:02.906715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.906763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.906775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:48.460 [2024-07-23 00:27:02.906786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:48.460 [2024-07-23 00:27:02.906814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.907321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.907337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:48.460 [2024-07-23 00:27:02.907349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:22:48.460 [2024-07-23 00:27:02.907367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.907493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.907506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:48.460 [2024-07-23 00:27:02.907517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:22:48.460 [2024-07-23 00:27:02.907527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.913479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.913513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:48.460 [2024-07-23 00:27:02.913526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.936 ms 00:22:48.460 [2024-07-23 00:27:02.913536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.916118] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:48.460 [2024-07-23 00:27:02.916158] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:48.460 [2024-07-23 00:27:02.916176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.916190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:48.460 [2024-07-23 00:27:02.916201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.537 ms 00:22:48.460 [2024-07-23 00:27:02.916211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.928858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.928894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:48.460 [2024-07-23 00:27:02.928914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.629 ms 00:22:48.460 [2024-07-23 00:27:02.928947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.930816] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.930847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:48.460 [2024-07-23 00:27:02.930859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.781 ms 00:22:48.460 [2024-07-23 00:27:02.930869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.932322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.932352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:48.460 [2024-07-23 00:27:02.932364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.421 ms 00:22:48.460 [2024-07-23 00:27:02.932373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.932656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.932673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:48.460 [2024-07-23 00:27:02.932692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:22:48.460 [2024-07-23 00:27:02.932702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.952721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.952789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:48.460 [2024-07-23 00:27:02.952818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.027 ms 00:22:48.460 [2024-07-23 00:27:02.952829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.959059] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:48.460 [2024-07-23 00:27:02.961614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.961645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:48.460 [2024-07-23 00:27:02.961659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.753 ms 00:22:48.460 [2024-07-23 00:27:02.961668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.961721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.961745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:48.460 [2024-07-23 00:27:02.961756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:48.460 [2024-07-23 00:27:02.961766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.962646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.962670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:48.460 [2024-07-23 00:27:02.962686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:22:48.460 [2024-07-23 00:27:02.962695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.962720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.962739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:48.460 [2024-07-23 00:27:02.962749] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:48.460 [2024-07-23 00:27:02.962759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.962793] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:48.460 [2024-07-23 00:27:02.962817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.962827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:48.460 [2024-07-23 00:27:02.962850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:48.460 [2024-07-23 00:27:02.962859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.966382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.966417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:48.460 [2024-07-23 00:27:02.966440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.510 ms 00:22:48.460 [2024-07-23 00:27:02.966457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.966518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.460 [2024-07-23 00:27:02.966530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:48.460 [2024-07-23 00:27:02.966541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:48.460 [2024-07-23 00:27:02.966555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.460 [2024-07-23 00:27:02.967606] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 108.628 ms, result 0 00:23:25.373  Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-23 00:27:39.811164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.373 [2024-07-23 00:27:39.811685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:25.373 [2024-07-23 00:27:39.811967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:25.373 [2024-07-23 00:27:39.812190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.373 [2024-07-23 00:27:39.812348]
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:25.373 [2024-07-23 00:27:39.813830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.373 [2024-07-23 00:27:39.814031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:25.373 [2024-07-23 00:27:39.814201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.135 ms 00:23:25.373 [2024-07-23 00:27:39.814355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.373 [2024-07-23 00:27:39.814931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.373 [2024-07-23 00:27:39.815020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:25.373 [2024-07-23 00:27:39.815307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:23:25.373 [2024-07-23 00:27:39.815349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.373 [2024-07-23 00:27:39.821282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.373 [2024-07-23 00:27:39.821342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:25.373 [2024-07-23 00:27:39.821365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.889 ms 00:23:25.373 [2024-07-23 00:27:39.821385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.373 [2024-07-23 00:27:39.829037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.373 [2024-07-23 00:27:39.829074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:25.373 [2024-07-23 00:27:39.829090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.631 ms 00:23:25.373 [2024-07-23 00:27:39.829110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.373 [2024-07-23 00:27:39.830782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.373 [2024-07-23 00:27:39.830824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:25.373 [2024-07-23 00:27:39.830840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:23:25.373 [2024-07-23 00:27:39.830853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.373 [2024-07-23 00:27:39.834655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.373 [2024-07-23 00:27:39.834690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:25.373 [2024-07-23 00:27:39.834702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.771 ms 00:23:25.373 [2024-07-23 00:27:39.834712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.373 [2024-07-23 00:27:39.838705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.373 [2024-07-23 00:27:39.838751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:25.373 [2024-07-23 00:27:39.838764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.968 ms 00:23:25.373 [2024-07-23 00:27:39.838779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.373 [2024-07-23 00:27:39.840772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.373 [2024-07-23 00:27:39.840806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:25.373 [2024-07-23 00:27:39.840817] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.979 ms 00:23:25.373 [2024-07-23 00:27:39.840827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.373 [2024-07-23 00:27:39.842300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.373 [2024-07-23 00:27:39.842331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:25.373 [2024-07-23 00:27:39.842343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.449 ms 00:23:25.373 [2024-07-23 00:27:39.842352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.373 [2024-07-23 00:27:39.843574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.373 [2024-07-23 00:27:39.843606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:25.373 [2024-07-23 00:27:39.843618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.197 ms 00:23:25.373 [2024-07-23 00:27:39.843627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.373 [2024-07-23 00:27:39.844726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.373 [2024-07-23 00:27:39.844758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:25.373 [2024-07-23 00:27:39.844769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:23:25.373 [2024-07-23 00:27:39.844779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.373 [2024-07-23 00:27:39.844803] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:25.373 [2024-07-23 00:27:39.844819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:25.373 [2024-07-23 00:27:39.844832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:23:25.373 [2024-07-23 00:27:39.844843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:25.373 [2024-07-23 00:27:39.844854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:25.373 [2024-07-23 00:27:39.844865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:25.373 [2024-07-23 00:27:39.844876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:25.373 [2024-07-23 00:27:39.844887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:25.373 [2024-07-23 00:27:39.844898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:25.373 [2024-07-23 00:27:39.844908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:25.373 [2024-07-23 00:27:39.844918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:25.373 [2024-07-23 00:27:39.844929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:25.373 [2024-07-23 00:27:39.844940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:25.373 [2024-07-23 00:27:39.844951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:25.373 [2024-07-23 00:27:39.844972] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:25.373 [2024-07-23 00:27:39.844983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:25.373 [2024-07-23 00:27:39.844993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 
00:27:39.845240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 
00:23:25.374 [2024-07-23 00:27:39.845566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 
wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:25.374 [2024-07-23 00:27:39.845886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:25.375 [2024-07-23 00:27:39.845897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:25.375 [2024-07-23 00:27:39.845909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:25.375 [2024-07-23 00:27:39.845919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:25.375 [2024-07-23 00:27:39.845930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:25.375 [2024-07-23 00:27:39.845940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:25.375 [2024-07-23 00:27:39.845957] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:25.375 [2024-07-23 00:27:39.845967] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 13606ebe-bc72-4516-98d6-72c7480a66e3 00:23:25.375 [2024-07-23 00:27:39.845978] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:23:25.375 [2024-07-23 00:27:39.845996] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:25.375 [2024-07-23 00:27:39.846005] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:25.375 [2024-07-23 00:27:39.846023] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:25.375 [2024-07-23 00:27:39.846033] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:25.375 [2024-07-23 00:27:39.846043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:25.375 [2024-07-23 00:27:39.846057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:25.375 [2024-07-23 00:27:39.846066] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:25.375 [2024-07-23 00:27:39.846074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:25.375 [2024-07-23 00:27:39.846084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.375 [2024-07-23 00:27:39.846101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:25.375 [2024-07-23 00:27:39.846112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:23:25.375 [2024-07-23 00:27:39.846122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.375 [2024-07-23 00:27:39.847789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:23:25.375 [2024-07-23 00:27:39.847809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:25.375 [2024-07-23 00:27:39.847821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.653 ms 00:23:25.375 [2024-07-23 00:27:39.847831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.375 [2024-07-23 00:27:39.847938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.375 [2024-07-23 00:27:39.847949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:25.375 [2024-07-23 00:27:39.847960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:25.375 [2024-07-23 00:27:39.847977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.375 [2024-07-23 00:27:39.854049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.375 [2024-07-23 00:27:39.854163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:25.375 [2024-07-23 00:27:39.854236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.375 [2024-07-23 00:27:39.854281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.375 [2024-07-23 00:27:39.854350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.375 [2024-07-23 00:27:39.854381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:25.375 [2024-07-23 00:27:39.854410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.375 [2024-07-23 00:27:39.854439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.375 [2024-07-23 00:27:39.854527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.375 [2024-07-23 00:27:39.854617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:25.375 [2024-07-23 00:27:39.854698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.375 [2024-07-23 00:27:39.854731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.375 [2024-07-23 00:27:39.854768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.375 [2024-07-23 00:27:39.854799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:25.375 [2024-07-23 00:27:39.854828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.375 [2024-07-23 00:27:39.854856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.375 [2024-07-23 00:27:39.866344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.375 [2024-07-23 00:27:39.866512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:25.375 [2024-07-23 00:27:39.866595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.375 [2024-07-23 00:27:39.866639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.375 [2024-07-23 00:27:39.874843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.375 [2024-07-23 00:27:39.874976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:25.375 [2024-07-23 00:27:39.875075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.375 [2024-07-23 00:27:39.875110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.375 [2024-07-23 
00:27:39.875182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.375 [2024-07-23 00:27:39.875215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:25.375 [2024-07-23 00:27:39.875245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.375 [2024-07-23 00:27:39.875403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.375 [2024-07-23 00:27:39.875467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.375 [2024-07-23 00:27:39.875500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:25.375 [2024-07-23 00:27:39.875530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.375 [2024-07-23 00:27:39.875559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.375 [2024-07-23 00:27:39.875669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.375 [2024-07-23 00:27:39.875741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:25.375 [2024-07-23 00:27:39.875771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.375 [2024-07-23 00:27:39.875800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.375 [2024-07-23 00:27:39.875856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.375 [2024-07-23 00:27:39.875894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:25.375 [2024-07-23 00:27:39.875971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.375 [2024-07-23 00:27:39.876007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.375 [2024-07-23 00:27:39.876069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.375 [2024-07-23 00:27:39.876147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:25.375 [2024-07-23 00:27:39.876216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.375 [2024-07-23 00:27:39.876245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.375 [2024-07-23 00:27:39.876331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.375 [2024-07-23 00:27:39.876366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:25.375 [2024-07-23 00:27:39.876397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.375 [2024-07-23 00:27:39.876410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.375 [2024-07-23 00:27:39.876536] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.481 ms, result 0 00:23:25.635 00:23:25.635 00:23:25.635 00:27:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:23:27.540 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:23:27.540 00:27:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:23:27.540 00:27:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:23:27.540 00:27:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:27.540 00:27:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:27.540 00:27:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:27.540 00:27:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:27.540 00:27:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:23:27.540 Process with pid 91356 is not found 00:23:27.540 00:27:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 91356 00:23:27.540 00:27:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@946 -- # '[' -z 91356 ']' 00:23:27.540 00:27:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # kill -0 91356 00:23:27.540 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (91356) - No such process 00:23:27.540 00:27:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@973 -- # echo 'Process with pid 91356 is not found' 00:23:27.540 00:27:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:23:27.799 Remove shared memory files 00:23:27.799 00:27:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:23:27.799 00:27:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:27.799 00:27:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:23:27.799 00:27:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:23:27.799 00:27:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:23:27.799 00:27:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:27.799 00:27:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:23:28.057 ************************************ 00:23:28.057 END TEST ftl_dirty_shutdown 00:23:28.057 ************************************ 00:23:28.057 00:23:28.057 real 3m17.395s 00:23:28.057 user 3m46.033s 00:23:28.057 sys 0m35.856s 00:23:28.057 00:27:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:28.057 00:27:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:28.057 00:27:42 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:23:28.057 00:27:42 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:23:28.057 00:27:42 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:28.058 00:27:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:28.058 ************************************ 00:23:28.058 START TEST ftl_upgrade_shutdown 00:23:28.058 ************************************ 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:23:28.058 * Looking for test storage... 
00:23:28.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown 
-- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93517 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93517 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 93517 ']' 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:28.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:28.058 00:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:28.317 [2024-07-23 00:27:42.793233] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:28.317 [2024-07-23 00:27:42.793375] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93517 ] 00:23:28.317 [2024-07-23 00:27:42.941926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.317 [2024-07-23 00:27:42.990420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=basen1 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 
00:23:29.252 00:27:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:23:29.511 00:27:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:23:29.511 { 00:23:29.511 "name": "basen1", 00:23:29.511 "aliases": [ 00:23:29.511 "a6526c54-e3a9-4552-a444-8c0b0cbc9bd7" 00:23:29.511 ], 00:23:29.511 "product_name": "NVMe disk", 00:23:29.511 "block_size": 4096, 00:23:29.511 "num_blocks": 1310720, 00:23:29.511 "uuid": "a6526c54-e3a9-4552-a444-8c0b0cbc9bd7", 00:23:29.511 "assigned_rate_limits": { 00:23:29.511 "rw_ios_per_sec": 0, 00:23:29.511 "rw_mbytes_per_sec": 0, 00:23:29.511 "r_mbytes_per_sec": 0, 00:23:29.511 "w_mbytes_per_sec": 0 00:23:29.511 }, 00:23:29.511 "claimed": true, 00:23:29.511 "claim_type": "read_many_write_one", 00:23:29.511 "zoned": false, 00:23:29.511 "supported_io_types": { 00:23:29.511 "read": true, 00:23:29.512 "write": true, 00:23:29.512 "unmap": true, 00:23:29.512 "write_zeroes": true, 00:23:29.512 "flush": true, 00:23:29.512 "reset": true, 00:23:29.512 "compare": true, 00:23:29.512 "compare_and_write": false, 00:23:29.512 "abort": true, 00:23:29.512 "nvme_admin": true, 00:23:29.512 "nvme_io": true 00:23:29.512 }, 00:23:29.512 "driver_specific": { 00:23:29.512 "nvme": [ 00:23:29.512 { 00:23:29.512 "pci_address": "0000:00:11.0", 00:23:29.512 "trid": { 00:23:29.512 "trtype": "PCIe", 00:23:29.512 "traddr": "0000:00:11.0" 00:23:29.512 }, 00:23:29.512 "ctrlr_data": { 00:23:29.512 "cntlid": 0, 00:23:29.512 "vendor_id": "0x1b36", 00:23:29.512 "model_number": "QEMU NVMe Ctrl", 00:23:29.512 "serial_number": "12341", 00:23:29.512 "firmware_revision": "8.0.0", 00:23:29.512 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:29.512 "oacs": { 00:23:29.512 "security": 0, 00:23:29.512 "format": 1, 00:23:29.512 "firmware": 0, 00:23:29.512 "ns_manage": 1 00:23:29.512 }, 00:23:29.512 "multi_ctrlr": false, 00:23:29.512 "ana_reporting": false 00:23:29.512 }, 00:23:29.512 "vs": { 00:23:29.512 "nvme_version": "1.4" 00:23:29.512 }, 00:23:29.512 "ns_data": { 00:23:29.512 "id": 1, 00:23:29.512 "can_share": false 00:23:29.512 } 00:23:29.512 } 00:23:29.512 ], 00:23:29.512 "mp_policy": "active_passive" 00:23:29.512 } 00:23:29.512 } 00:23:29.512 ]' 00:23:29.512 00:27:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:23:29.512 00:27:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:23:29.512 00:27:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:23:29.512 00:27:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:23:29.512 00:27:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:23:29.512 00:27:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:23:29.512 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:29.512 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:23:29.512 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:29.512 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:29.512 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:29.771 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=0f2f8dab-a8ff-4851-87ad-068a9590fb27 00:23:29.771 00:27:44 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@29 -- # for lvs in $stores 00:23:29.771 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f2f8dab-a8ff-4851-87ad-068a9590fb27 00:23:30.031 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:23:30.031 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=1a8ac480-b00a-4220-8bab-2f2caae71c69 00:23:30.031 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 1a8ac480-b00a-4220-8bab-2f2caae71c69 00:23:30.290 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=2b4b8450-a84a-4ccf-90cb-f44d68e0ca90 00:23:30.290 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 2b4b8450-a84a-4ccf-90cb-f44d68e0ca90 ]] 00:23:30.290 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 2b4b8450-a84a-4ccf-90cb-f44d68e0ca90 5120 00:23:30.290 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:23:30.290 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:30.290 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=2b4b8450-a84a-4ccf-90cb-f44d68e0ca90 00:23:30.290 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:23:30.290 00:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 2b4b8450-a84a-4ccf-90cb-f44d68e0ca90 00:23:30.290 00:27:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=2b4b8450-a84a-4ccf-90cb-f44d68e0ca90 00:23:30.290 00:27:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:23:30.290 00:27:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:23:30.290 00:27:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:23:30.290 00:27:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2b4b8450-a84a-4ccf-90cb-f44d68e0ca90 00:23:30.549 00:27:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:23:30.549 { 00:23:30.549 "name": "2b4b8450-a84a-4ccf-90cb-f44d68e0ca90", 00:23:30.549 "aliases": [ 00:23:30.549 "lvs/basen1p0" 00:23:30.549 ], 00:23:30.549 "product_name": "Logical Volume", 00:23:30.549 "block_size": 4096, 00:23:30.549 "num_blocks": 5242880, 00:23:30.549 "uuid": "2b4b8450-a84a-4ccf-90cb-f44d68e0ca90", 00:23:30.549 "assigned_rate_limits": { 00:23:30.549 "rw_ios_per_sec": 0, 00:23:30.549 "rw_mbytes_per_sec": 0, 00:23:30.549 "r_mbytes_per_sec": 0, 00:23:30.549 "w_mbytes_per_sec": 0 00:23:30.549 }, 00:23:30.549 "claimed": false, 00:23:30.549 "zoned": false, 00:23:30.549 "supported_io_types": { 00:23:30.549 "read": true, 00:23:30.549 "write": true, 00:23:30.549 "unmap": true, 00:23:30.549 "write_zeroes": true, 00:23:30.549 "flush": false, 00:23:30.549 "reset": true, 00:23:30.549 "compare": false, 00:23:30.549 "compare_and_write": false, 00:23:30.549 "abort": false, 00:23:30.549 "nvme_admin": false, 00:23:30.549 "nvme_io": false 00:23:30.549 }, 00:23:30.549 "driver_specific": { 00:23:30.549 "lvol": { 00:23:30.549 "lvol_store_uuid": "1a8ac480-b00a-4220-8bab-2f2caae71c69", 00:23:30.549 "base_bdev": "basen1", 00:23:30.549 "thin_provision": true, 00:23:30.549 "num_allocated_clusters": 0, 00:23:30.549 
"snapshot": false, 00:23:30.549 "clone": false, 00:23:30.549 "esnap_clone": false 00:23:30.549 } 00:23:30.549 } 00:23:30.549 } 00:23:30.549 ]' 00:23:30.549 00:27:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:23:30.549 00:27:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:23:30.549 00:27:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:23:30.549 00:27:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=5242880 00:23:30.549 00:27:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=20480 00:23:30.549 00:27:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 20480 00:23:30.549 00:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:23:30.549 00:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:30.549 00:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:23:30.808 00:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:23:30.808 00:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:23:30.808 00:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:23:31.068 00:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:23:31.068 00:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:23:31.068 00:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 2b4b8450-a84a-4ccf-90cb-f44d68e0ca90 -c cachen1p0 --l2p_dram_limit 2 00:23:31.068 [2024-07-23 00:27:45.721274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:31.068 [2024-07-23 00:27:45.721333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:31.068 [2024-07-23 00:27:45.721379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:31.068 [2024-07-23 00:27:45.721390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:31.068 [2024-07-23 00:27:45.721473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:31.068 [2024-07-23 00:27:45.721487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:31.068 [2024-07-23 00:27:45.721500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:23:31.068 [2024-07-23 00:27:45.721515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:31.068 [2024-07-23 00:27:45.721542] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:31.068 [2024-07-23 00:27:45.721816] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:31.069 [2024-07-23 00:27:45.721845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:31.069 [2024-07-23 00:27:45.721859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:31.069 [2024-07-23 00:27:45.721879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.311 ms 00:23:31.069 [2024-07-23 00:27:45.721889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:31.069 [2024-07-23 00:27:45.721963] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: 
*NOTICE*: [FTL][ftl] Create new FTL, UUID dcbc8d67-06ed-4176-9b05-4dbd609a1a6c 00:23:31.069 [2024-07-23 00:27:45.723413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:31.069 [2024-07-23 00:27:45.723454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:23:31.069 [2024-07-23 00:27:45.723466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:23:31.069 [2024-07-23 00:27:45.723489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:31.069 [2024-07-23 00:27:45.731000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:31.069 [2024-07-23 00:27:45.731037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:31.069 [2024-07-23 00:27:45.731049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.464 ms 00:23:31.069 [2024-07-23 00:27:45.731060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:31.069 [2024-07-23 00:27:45.731101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:31.069 [2024-07-23 00:27:45.731119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:31.069 [2024-07-23 00:27:45.731130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:23:31.069 [2024-07-23 00:27:45.731142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:31.069 [2024-07-23 00:27:45.731195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:31.069 [2024-07-23 00:27:45.731221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:31.069 [2024-07-23 00:27:45.731230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:23:31.069 [2024-07-23 00:27:45.731242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:31.069 [2024-07-23 00:27:45.731277] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:31.069 [2024-07-23 00:27:45.733114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:31.069 [2024-07-23 00:27:45.733144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:31.069 [2024-07-23 00:27:45.733159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.856 ms 00:23:31.069 [2024-07-23 00:27:45.733168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:31.069 [2024-07-23 00:27:45.733201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:31.069 [2024-07-23 00:27:45.733211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:31.069 [2024-07-23 00:27:45.733233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:31.069 [2024-07-23 00:27:45.733243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:31.069 [2024-07-23 00:27:45.733281] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:23:31.069 [2024-07-23 00:27:45.733427] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:23:31.069 [2024-07-23 00:27:45.733451] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:31.069 [2024-07-23 00:27:45.733464] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:23:31.069 [2024-07-23 00:27:45.733480] 
ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:31.069 [2024-07-23 00:27:45.733491] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:23:31.069 [2024-07-23 00:27:45.733504] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:31.069 [2024-07-23 00:27:45.733514] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:31.069 [2024-07-23 00:27:45.733529] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:23:31.069 [2024-07-23 00:27:45.733539] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:23:31.069 [2024-07-23 00:27:45.733552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:31.069 [2024-07-23 00:27:45.733562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:23:31.069 [2024-07-23 00:27:45.733575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.276 ms 00:23:31.069 [2024-07-23 00:27:45.733584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:31.069 [2024-07-23 00:27:45.733657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:31.069 [2024-07-23 00:27:45.733674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:31.069 [2024-07-23 00:27:45.733690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:23:31.069 [2024-07-23 00:27:45.733706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:31.069 [2024-07-23 00:27:45.733805] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:31.069 [2024-07-23 00:27:45.733824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:31.069 [2024-07-23 00:27:45.733838] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:31.069 [2024-07-23 00:27:45.733854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:31.069 [2024-07-23 00:27:45.733867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:31.069 [2024-07-23 00:27:45.733877] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:31.069 [2024-07-23 00:27:45.733889] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:31.069 [2024-07-23 00:27:45.733898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:31.069 [2024-07-23 00:27:45.733909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:23:31.069 [2024-07-23 00:27:45.733919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:31.069 [2024-07-23 00:27:45.733930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:31.069 [2024-07-23 00:27:45.733939] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:23:31.069 [2024-07-23 00:27:45.733951] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:31.069 [2024-07-23 00:27:45.733960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:31.069 [2024-07-23 00:27:45.733974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:23:31.069 [2024-07-23 00:27:45.733984] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:31.069 [2024-07-23 00:27:45.733995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:23:31.069 [2024-07-23 00:27:45.734004] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:23:31.069 [2024-07-23 00:27:45.734016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:31.069 [2024-07-23 00:27:45.734025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:31.069 [2024-07-23 00:27:45.734036] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:31.069 [2024-07-23 00:27:45.734045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:31.069 [2024-07-23 00:27:45.734057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:31.069 [2024-07-23 00:27:45.734066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:31.069 [2024-07-23 00:27:45.734079] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:31.069 [2024-07-23 00:27:45.734088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:31.069 [2024-07-23 00:27:45.734099] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:31.069 [2024-07-23 00:27:45.734108] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:31.069 [2024-07-23 00:27:45.734120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:31.069 [2024-07-23 00:27:45.734128] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:23:31.069 [2024-07-23 00:27:45.734142] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:31.069 [2024-07-23 00:27:45.734153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:31.069 [2024-07-23 00:27:45.734164] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:23:31.069 [2024-07-23 00:27:45.734174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:31.069 [2024-07-23 00:27:45.734185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:31.069 [2024-07-23 00:27:45.734194] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:23:31.069 [2024-07-23 00:27:45.734206] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:31.069 [2024-07-23 00:27:45.734215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:23:31.069 [2024-07-23 00:27:45.734227] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:23:31.069 [2024-07-23 00:27:45.734236] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:31.069 [2024-07-23 00:27:45.734247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:23:31.069 [2024-07-23 00:27:45.734256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:23:31.069 [2024-07-23 00:27:45.734280] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:31.069 [2024-07-23 00:27:45.734289] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:23:31.069 [2024-07-23 00:27:45.734302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:31.069 [2024-07-23 00:27:45.734311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:31.069 [2024-07-23 00:27:45.734326] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:31.069 [2024-07-23 00:27:45.734347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:23:31.069 [2024-07-23 00:27:45.734359] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:23:31.069 [2024-07-23 00:27:45.734369] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:31.069 [2024-07-23 00:27:45.734381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:31.069 [2024-07-23 00:27:45.734390] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:23:31.069 [2024-07-23 00:27:45.734402] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:31.069 [2024-07-23 00:27:45.734415] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:31.069 [2024-07-23 00:27:45.734429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:31.069 [2024-07-23 00:27:45.734444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:31.069 [2024-07-23 00:27:45.734457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:23:31.070 [2024-07-23 00:27:45.734467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:23:31.070 [2024-07-23 00:27:45.734480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:23:31.070 [2024-07-23 00:27:45.734498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:23:31.070 [2024-07-23 00:27:45.734512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:23:31.070 [2024-07-23 00:27:45.734522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:23:31.070 [2024-07-23 00:27:45.734537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:23:31.070 [2024-07-23 00:27:45.734548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:23:31.070 [2024-07-23 00:27:45.734561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:23:31.070 [2024-07-23 00:27:45.734572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:23:31.070 [2024-07-23 00:27:45.734584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:23:31.070 [2024-07-23 00:27:45.734594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:23:31.070 [2024-07-23 00:27:45.734609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:23:31.070 [2024-07-23 00:27:45.734618] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:23:31.070 [2024-07-23 00:27:45.734632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:31.070 [2024-07-23 00:27:45.734643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:31.070 [2024-07-23 00:27:45.734656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:23:31.070 [2024-07-23 00:27:45.734666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:31.070 [2024-07-23 00:27:45.734679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:31.070 [2024-07-23 00:27:45.734690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:31.070 [2024-07-23 00:27:45.734702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:31.070 [2024-07-23 00:27:45.734711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.941 ms 00:23:31.070 [2024-07-23 00:27:45.734726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:31.070 [2024-07-23 00:27:45.734770] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:23:31.070 [2024-07-23 00:27:45.734785] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:23:34.401 [2024-07-23 00:27:48.999643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.401 [2024-07-23 00:27:48.999725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:23:34.401 [2024-07-23 00:27:48.999741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3270.171 ms 00:23:34.401 [2024-07-23 00:27:48.999753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.401 [2024-07-23 00:27:49.010704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.401 [2024-07-23 00:27:49.010751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:34.401 [2024-07-23 00:27:49.010766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.868 ms 00:23:34.401 [2024-07-23 00:27:49.010779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.401 [2024-07-23 00:27:49.010821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.401 [2024-07-23 00:27:49.010855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:34.402 [2024-07-23 00:27:49.010866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:23:34.402 [2024-07-23 00:27:49.010878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.402 [2024-07-23 00:27:49.021393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.402 [2024-07-23 00:27:49.021437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:34.402 [2024-07-23 00:27:49.021478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.481 ms 00:23:34.402 [2024-07-23 00:27:49.021492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.402 [2024-07-23 00:27:49.021530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.402 [2024-07-23 00:27:49.021543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:34.402 [2024-07-23 00:27:49.021563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:34.402 [2024-07-23 00:27:49.021582] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.402 [2024-07-23 00:27:49.022044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.402 [2024-07-23 00:27:49.022069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:34.402 [2024-07-23 00:27:49.022080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.411 ms 00:23:34.402 [2024-07-23 00:27:49.022093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.402 [2024-07-23 00:27:49.022130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.402 [2024-07-23 00:27:49.022149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:34.402 [2024-07-23 00:27:49.022159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:23:34.402 [2024-07-23 00:27:49.022172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.402 [2024-07-23 00:27:49.029453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.402 [2024-07-23 00:27:49.029493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:34.402 [2024-07-23 00:27:49.029507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.272 ms 00:23:34.402 [2024-07-23 00:27:49.029519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.402 [2024-07-23 00:27:49.037247] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:34.402 [2024-07-23 00:27:49.038320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.402 [2024-07-23 00:27:49.038347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:34.402 [2024-07-23 00:27:49.038362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.742 ms 00:23:34.402 [2024-07-23 00:27:49.038373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.402 [2024-07-23 00:27:49.064173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.402 [2024-07-23 00:27:49.064223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:23:34.402 [2024-07-23 00:27:49.064247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.806 ms 00:23:34.402 [2024-07-23 00:27:49.064285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.402 [2024-07-23 00:27:49.064418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.402 [2024-07-23 00:27:49.064436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:34.402 [2024-07-23 00:27:49.064456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:23:34.402 [2024-07-23 00:27:49.064481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.402 [2024-07-23 00:27:49.067606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.402 [2024-07-23 00:27:49.067646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:23:34.402 [2024-07-23 00:27:49.067661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.096 ms 00:23:34.402 [2024-07-23 00:27:49.067690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.402 [2024-07-23 00:27:49.070656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.402 [2024-07-23 00:27:49.070688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 
00:23:34.402 [2024-07-23 00:27:49.070703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.928 ms 00:23:34.402 [2024-07-23 00:27:49.070712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.402 [2024-07-23 00:27:49.070971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.402 [2024-07-23 00:27:49.070985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:34.402 [2024-07-23 00:27:49.071008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.222 ms 00:23:34.402 [2024-07-23 00:27:49.071018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.661 [2024-07-23 00:27:49.108974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.662 [2024-07-23 00:27:49.109022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:23:34.662 [2024-07-23 00:27:49.109039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.948 ms 00:23:34.662 [2024-07-23 00:27:49.109053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.662 [2024-07-23 00:27:49.113452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.662 [2024-07-23 00:27:49.113487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:23:34.662 [2024-07-23 00:27:49.113503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.362 ms 00:23:34.662 [2024-07-23 00:27:49.113512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.662 [2024-07-23 00:27:49.116648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.662 [2024-07-23 00:27:49.116680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:23:34.662 [2024-07-23 00:27:49.116695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.098 ms 00:23:34.662 [2024-07-23 00:27:49.116704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.662 [2024-07-23 00:27:49.120108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.662 [2024-07-23 00:27:49.120143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:23:34.662 [2024-07-23 00:27:49.120159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.368 ms 00:23:34.662 [2024-07-23 00:27:49.120168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.662 [2024-07-23 00:27:49.120215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.662 [2024-07-23 00:27:49.120234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:34.662 [2024-07-23 00:27:49.120247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:23:34.662 [2024-07-23 00:27:49.120279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.662 [2024-07-23 00:27:49.120344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.662 [2024-07-23 00:27:49.120355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:23:34.662 [2024-07-23 00:27:49.120375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:23:34.662 [2024-07-23 00:27:49.120385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.662 [2024-07-23 00:27:49.121541] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3405.358 ms, result 0 00:23:34.662 { 00:23:34.662 "name": 
"ftl", 00:23:34.662 "uuid": "dcbc8d67-06ed-4176-9b05-4dbd609a1a6c" 00:23:34.662 } 00:23:34.662 00:27:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:23:34.662 [2024-07-23 00:27:49.321762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.662 00:27:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:23:34.921 00:27:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:23:35.180 [2024-07-23 00:27:49.673529] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:23:35.180 00:27:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:23:35.439 [2024-07-23 00:27:49.865706] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:35.439 00:27:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:23:35.698 Fill FTL, iteration 1 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=93629 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 93629 /var/tmp/spdk.tgt.sock 00:23:35.698 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 93629 ']' 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:35.698 00:27:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:35.698 [2024-07-23 00:27:50.283297] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:35.698 [2024-07-23 00:27:50.283415] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93629 ] 00:23:35.958 [2024-07-23 00:27:50.426034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.958 [2024-07-23 00:27:50.491283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.526 00:27:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:36.526 00:27:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:23:36.526 00:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:23:36.785 ftln1 00:23:36.785 00:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:23:36.785 00:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:23:37.045 00:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:23:37.045 00:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 93629 00:23:37.045 00:27:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 93629 ']' 00:23:37.045 00:27:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 93629 00:23:37.045 00:27:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:23:37.045 00:27:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:37.045 00:27:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93629 00:23:37.045 killing process with pid 93629 00:23:37.045 00:27:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:37.045 00:27:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:37.045 00:27:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93629' 00:23:37.045 00:27:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 93629 00:23:37.045 00:27:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 93629 00:23:37.304 00:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:23:37.304 00:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:23:37.564 [2024-07-23 00:27:52.052384] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:37.564 [2024-07-23 00:27:52.052519] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93660 ] 00:23:37.564 [2024-07-23 00:27:52.203562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.823 [2024-07-23 00:27:52.249520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.283  Copying: 252/1024 [MB] (252 MBps) Copying: 505/1024 [MB] (253 MBps) Copying: 760/1024 [MB] (255 MBps) Copying: 1011/1024 [MB] (251 MBps) Copying: 1024/1024 [MB] (average 252 MBps) 00:23:42.283 00:23:42.283 Calculate MD5 checksum, iteration 1 00:23:42.283 00:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:23:42.283 00:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:23:42.283 00:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:42.283 00:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:42.283 00:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:42.283 00:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:42.283 00:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:42.283 00:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:42.283 [2024-07-23 00:27:56.827910] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:42.283 [2024-07-23 00:27:56.828257] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93707 ] 00:23:42.542 [2024-07-23 00:27:56.980528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.542 [2024-07-23 00:27:57.025797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.437  Copying: 712/1024 [MB] (712 MBps) Copying: 1024/1024 [MB] (average 690 MBps) 00:23:44.437 00:23:44.437 00:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:23:44.437 00:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:46.343 00:28:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:23:46.343 Fill FTL, iteration 2 00:23:46.343 00:28:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f00750813746db45513cbf7446d9c40f 00:23:46.343 00:28:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:23:46.343 00:28:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:23:46.343 00:28:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:23:46.343 00:28:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:23:46.343 00:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:46.343 00:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:46.343 00:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:46.343 00:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:46.343 00:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:23:46.343 [2024-07-23 00:28:00.749827] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:46.343 [2024-07-23 00:28:00.749960] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93757 ] 00:23:46.343 [2024-07-23 00:28:00.900948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.343 [2024-07-23 00:28:00.946696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.792  Copying: 252/1024 [MB] (252 MBps) Copying: 505/1024 [MB] (253 MBps) Copying: 755/1024 [MB] (250 MBps) Copying: 1010/1024 [MB] (255 MBps) Copying: 1024/1024 [MB] (average 252 MBps) 00:23:50.792 00:23:50.792 Calculate MD5 checksum, iteration 2 00:23:50.792 00:28:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:23:50.792 00:28:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:23:50.792 00:28:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:50.792 00:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:50.792 00:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:50.792 00:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:50.792 00:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:50.792 00:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:51.051 [2024-07-23 00:28:05.530956] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:51.051 [2024-07-23 00:28:05.531090] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93810 ] 00:23:51.051 [2024-07-23 00:28:05.681211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.051 [2024-07-23 00:28:05.727547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.894  Copying: 705/1024 [MB] (705 MBps) Copying: 1024/1024 [MB] (average 692 MBps) 00:23:53.894 00:23:53.894 00:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:23:53.894 00:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:55.804 00:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:23:55.804 00:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=9b224b0ae377b033c4d2e01c37e335e6 00:23:55.804 00:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:23:55.804 00:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:23:55.804 00:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:23:55.804 [2024-07-23 00:28:10.235501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:55.804 [2024-07-23 00:28:10.235557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:23:55.804 [2024-07-23 00:28:10.235575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:23:55.804 [2024-07-23 00:28:10.235587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:55.804 [2024-07-23 00:28:10.235613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:55.804 [2024-07-23 00:28:10.235632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:23:55.804 [2024-07-23 00:28:10.235646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:55.804 [2024-07-23 00:28:10.235657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:55.804 [2024-07-23 00:28:10.235678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:55.804 [2024-07-23 00:28:10.235689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:23:55.804 [2024-07-23 00:28:10.235699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:55.804 [2024-07-23 00:28:10.235709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:55.804 [2024-07-23 00:28:10.235771] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.264 ms, result 0 00:23:55.804 true 00:23:55.804 00:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:55.804 { 00:23:55.804 "name": "ftl", 00:23:55.804 "properties": [ 00:23:55.804 { 00:23:55.804 "name": "superblock_version", 00:23:55.804 "value": 5, 00:23:55.804 "read-only": true 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "name": "base_device", 00:23:55.804 "bands": [ 00:23:55.804 { 00:23:55.804 "id": 0, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 1, 00:23:55.804 "state": 
"FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 2, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 3, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 4, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 5, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 6, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 7, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 8, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 9, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 10, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 11, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 12, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 13, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 14, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 15, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 16, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 17, 00:23:55.804 "state": "FREE", 00:23:55.804 "validity": 0.0 00:23:55.804 } 00:23:55.804 ], 00:23:55.804 "read-only": true 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "name": "cache_device", 00:23:55.804 "type": "bdev", 00:23:55.804 "chunks": [ 00:23:55.804 { 00:23:55.804 "id": 0, 00:23:55.804 "state": "INACTIVE", 00:23:55.804 "utilization": 0.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 1, 00:23:55.804 "state": "CLOSED", 00:23:55.804 "utilization": 1.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 2, 00:23:55.804 "state": "CLOSED", 00:23:55.804 "utilization": 1.0 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 3, 00:23:55.804 "state": "OPEN", 00:23:55.804 "utilization": 0.001953125 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "id": 4, 00:23:55.804 "state": "OPEN", 00:23:55.804 "utilization": 0.0 00:23:55.804 } 00:23:55.804 ], 00:23:55.804 "read-only": true 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "name": "verbose_mode", 00:23:55.804 "value": true, 00:23:55.804 "unit": "", 00:23:55.804 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:23:55.804 }, 00:23:55.804 { 00:23:55.804 "name": "prep_upgrade_on_shutdown", 00:23:55.804 "value": false, 00:23:55.804 "unit": "", 00:23:55.804 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:23:55.804 } 00:23:55.804 ] 00:23:55.804 } 00:23:55.804 00:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:23:56.064 [2024-07-23 00:28:10.622469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:56.064 [2024-07-23 00:28:10.622525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Decode property 00:23:56.064 [2024-07-23 00:28:10.622541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:23:56.064 [2024-07-23 00:28:10.622552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:56.064 [2024-07-23 00:28:10.622578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:56.064 [2024-07-23 00:28:10.622589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:23:56.064 [2024-07-23 00:28:10.622599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:56.064 [2024-07-23 00:28:10.622609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:56.064 [2024-07-23 00:28:10.622629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:56.064 [2024-07-23 00:28:10.622639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:23:56.064 [2024-07-23 00:28:10.622649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:56.064 [2024-07-23 00:28:10.622658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:56.064 [2024-07-23 00:28:10.622717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.248 ms, result 0 00:23:56.064 true 00:23:56.064 00:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:23:56.064 00:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:56.064 00:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:23:56.323 00:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:23:56.323 00:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:23:56.323 00:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:23:56.582 [2024-07-23 00:28:11.006442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:56.582 [2024-07-23 00:28:11.006498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:23:56.582 [2024-07-23 00:28:11.006522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:23:56.582 [2024-07-23 00:28:11.006533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:56.582 [2024-07-23 00:28:11.006561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:56.582 [2024-07-23 00:28:11.006572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:23:56.582 [2024-07-23 00:28:11.006583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:56.582 [2024-07-23 00:28:11.006593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:56.582 [2024-07-23 00:28:11.006613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:56.582 [2024-07-23 00:28:11.006623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:23:56.582 [2024-07-23 00:28:11.006634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:56.582 [2024-07-23 00:28:11.006643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:56.582 [2024-07-23 00:28:11.006702] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.252 ms, result 0 00:23:56.582 true 00:23:56.582 00:28:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:56.582 { 00:23:56.582 "name": "ftl", 00:23:56.582 "properties": [ 00:23:56.582 { 00:23:56.582 "name": "superblock_version", 00:23:56.582 "value": 5, 00:23:56.582 "read-only": true 00:23:56.582 }, 00:23:56.582 { 00:23:56.582 "name": "base_device", 00:23:56.582 "bands": [ 00:23:56.582 { 00:23:56.582 "id": 0, 00:23:56.582 "state": "FREE", 00:23:56.582 "validity": 0.0 00:23:56.582 }, 00:23:56.582 { 00:23:56.582 "id": 1, 00:23:56.582 "state": "FREE", 00:23:56.582 "validity": 0.0 00:23:56.582 }, 00:23:56.582 { 00:23:56.582 "id": 2, 00:23:56.582 "state": "FREE", 00:23:56.582 "validity": 0.0 00:23:56.582 }, 00:23:56.582 { 00:23:56.582 "id": 3, 00:23:56.582 "state": "FREE", 00:23:56.582 "validity": 0.0 00:23:56.582 }, 00:23:56.582 { 00:23:56.582 "id": 4, 00:23:56.582 "state": "FREE", 00:23:56.582 "validity": 0.0 00:23:56.582 }, 00:23:56.582 { 00:23:56.582 "id": 5, 00:23:56.582 "state": "FREE", 00:23:56.582 "validity": 0.0 00:23:56.582 }, 00:23:56.583 { 00:23:56.583 "id": 6, 00:23:56.583 "state": "FREE", 00:23:56.583 "validity": 0.0 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "id": 7, 00:23:56.583 "state": "FREE", 00:23:56.583 "validity": 0.0 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "id": 8, 00:23:56.583 "state": "FREE", 00:23:56.583 "validity": 0.0 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "id": 9, 00:23:56.583 "state": "FREE", 00:23:56.583 "validity": 0.0 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "id": 10, 00:23:56.583 "state": "FREE", 00:23:56.583 "validity": 0.0 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "id": 11, 00:23:56.583 "state": "FREE", 00:23:56.583 "validity": 0.0 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "id": 12, 00:23:56.583 "state": "FREE", 00:23:56.583 "validity": 0.0 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "id": 13, 00:23:56.583 "state": "FREE", 00:23:56.583 "validity": 0.0 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "id": 14, 00:23:56.583 "state": "FREE", 00:23:56.583 "validity": 0.0 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "id": 15, 00:23:56.583 "state": "FREE", 00:23:56.583 "validity": 0.0 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "id": 16, 00:23:56.583 "state": "FREE", 00:23:56.583 "validity": 0.0 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "id": 17, 00:23:56.583 "state": "FREE", 00:23:56.583 "validity": 0.0 00:23:56.583 } 00:23:56.583 ], 00:23:56.583 "read-only": true 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "name": "cache_device", 00:23:56.583 "type": "bdev", 00:23:56.583 "chunks": [ 00:23:56.583 { 00:23:56.583 "id": 0, 00:23:56.583 "state": "INACTIVE", 00:23:56.583 "utilization": 0.0 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "id": 1, 00:23:56.583 "state": "CLOSED", 00:23:56.583 "utilization": 1.0 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "id": 2, 00:23:56.583 "state": "CLOSED", 00:23:56.583 "utilization": 1.0 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "id": 3, 00:23:56.583 "state": "OPEN", 00:23:56.583 "utilization": 0.001953125 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "id": 4, 00:23:56.583 "state": "OPEN", 00:23:56.583 "utilization": 0.0 00:23:56.583 } 00:23:56.583 ], 00:23:56.583 "read-only": true 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "name": "verbose_mode", 00:23:56.583 "value": true, 00:23:56.583 "unit": "", 00:23:56.583 "desc": "In 
verbose mode, user is able to get access to additional advanced FTL properties" 00:23:56.583 }, 00:23:56.583 { 00:23:56.583 "name": "prep_upgrade_on_shutdown", 00:23:56.583 "value": true, 00:23:56.583 "unit": "", 00:23:56.583 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:23:56.583 } 00:23:56.583 ] 00:23:56.583 } 00:23:56.583 00:28:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:23:56.583 00:28:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93517 ]] 00:23:56.583 00:28:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93517 00:23:56.583 00:28:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 93517 ']' 00:23:56.583 00:28:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 93517 00:23:56.583 00:28:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:23:56.583 00:28:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:56.583 00:28:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93517 00:23:56.842 killing process with pid 93517 00:23:56.842 00:28:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:56.842 00:28:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:56.842 00:28:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93517' 00:23:56.842 00:28:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 93517 00:23:56.842 00:28:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 93517 00:23:56.842 [2024-07-23 00:28:11.414661] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:23:56.842 [2024-07-23 00:28:11.417738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:56.842 [2024-07-23 00:28:11.417781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:23:56.842 [2024-07-23 00:28:11.417797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:56.842 [2024-07-23 00:28:11.417808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:56.842 [2024-07-23 00:28:11.417841] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:23:56.842 [2024-07-23 00:28:11.418629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:56.842 [2024-07-23 00:28:11.418685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:23:56.842 [2024-07-23 00:28:11.418867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.771 ms 00:23:56.842 [2024-07-23 00:28:11.418905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.974 [2024-07-23 00:28:18.255978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.974 [2024-07-23 00:28:18.256245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:24:04.974 [2024-07-23 00:28:18.256282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6848.108 ms 00:24:04.974 [2024-07-23 00:28:18.256294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.974 [2024-07-23 00:28:18.257372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.974 [2024-07-23 00:28:18.257396] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:24:04.974 [2024-07-23 00:28:18.257410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.047 ms 00:24:04.974 [2024-07-23 00:28:18.257420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.974 [2024-07-23 00:28:18.258355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.974 [2024-07-23 00:28:18.258377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:24:04.974 [2024-07-23 00:28:18.258401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.907 ms 00:24:04.974 [2024-07-23 00:28:18.258411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.974 [2024-07-23 00:28:18.260095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.974 [2024-07-23 00:28:18.260132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:24:04.974 [2024-07-23 00:28:18.260144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.648 ms 00:24:04.974 [2024-07-23 00:28:18.260155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.974 [2024-07-23 00:28:18.262372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.974 [2024-07-23 00:28:18.262410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:24:04.974 [2024-07-23 00:28:18.262432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.190 ms 00:24:04.974 [2024-07-23 00:28:18.262443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.974 [2024-07-23 00:28:18.262508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.975 [2024-07-23 00:28:18.262520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:24:04.975 [2024-07-23 00:28:18.262531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:24:04.975 [2024-07-23 00:28:18.262545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.263814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.975 [2024-07-23 00:28:18.263848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:24:04.975 [2024-07-23 00:28:18.263860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.253 ms 00:24:04.975 [2024-07-23 00:28:18.263869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.265083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.975 [2024-07-23 00:28:18.265116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:24:04.975 [2024-07-23 00:28:18.265128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.187 ms 00:24:04.975 [2024-07-23 00:28:18.265138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.266276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.975 [2024-07-23 00:28:18.266306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:24:04.975 [2024-07-23 00:28:18.266317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.097 ms 00:24:04.975 [2024-07-23 00:28:18.266326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.267517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.975 
[2024-07-23 00:28:18.267550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:24:04.975 [2024-07-23 00:28:18.267561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.140 ms 00:24:04.975 [2024-07-23 00:28:18.267571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.267597] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:24:04.975 [2024-07-23 00:28:18.267613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:04.975 [2024-07-23 00:28:18.267625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:24:04.975 [2024-07-23 00:28:18.267636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:24:04.975 [2024-07-23 00:28:18.267647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:04.975 [2024-07-23 00:28:18.267658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:04.975 [2024-07-23 00:28:18.267669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:04.975 [2024-07-23 00:28:18.267679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:04.975 [2024-07-23 00:28:18.267690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:04.975 [2024-07-23 00:28:18.267700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:04.975 [2024-07-23 00:28:18.267711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:04.975 [2024-07-23 00:28:18.267722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:04.975 [2024-07-23 00:28:18.267733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:04.975 [2024-07-23 00:28:18.267743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:04.975 [2024-07-23 00:28:18.267754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:04.975 [2024-07-23 00:28:18.267764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:04.975 [2024-07-23 00:28:18.267774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:04.975 [2024-07-23 00:28:18.267785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:04.975 [2024-07-23 00:28:18.267796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:04.975 [2024-07-23 00:28:18.267808] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:24:04.975 [2024-07-23 00:28:18.267818] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: dcbc8d67-06ed-4176-9b05-4dbd609a1a6c 00:24:04.975 [2024-07-23 00:28:18.267829] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:24:04.975 [2024-07-23 00:28:18.267839] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:24:04.975 [2024-07-23 00:28:18.267849] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:24:04.975 [2024-07-23 00:28:18.267859] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:24:04.975 [2024-07-23 00:28:18.267868] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:24:04.975 [2024-07-23 00:28:18.267878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:24:04.975 [2024-07-23 00:28:18.267894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:24:04.975 [2024-07-23 00:28:18.267903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:24:04.975 [2024-07-23 00:28:18.267912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:24:04.975 [2024-07-23 00:28:18.267922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.975 [2024-07-23 00:28:18.267933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:24:04.975 [2024-07-23 00:28:18.267944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.326 ms 00:24:04.975 [2024-07-23 00:28:18.267953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.269756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.975 [2024-07-23 00:28:18.269786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:24:04.975 [2024-07-23 00:28:18.269797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.789 ms 00:24:04.975 [2024-07-23 00:28:18.269807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.269914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.975 [2024-07-23 00:28:18.269925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:24:04.975 [2024-07-23 00:28:18.269937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:24:04.975 [2024-07-23 00:28:18.269946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.276878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:04.975 [2024-07-23 00:28:18.276908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:04.975 [2024-07-23 00:28:18.276920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:04.975 [2024-07-23 00:28:18.276930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.276971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:04.975 [2024-07-23 00:28:18.276983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:04.975 [2024-07-23 00:28:18.276993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:04.975 [2024-07-23 00:28:18.277003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.277076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:04.975 [2024-07-23 00:28:18.277089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:04.975 [2024-07-23 00:28:18.277099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:04.975 [2024-07-23 00:28:18.277109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.277131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:04.975 [2024-07-23 
00:28:18.277142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:04.975 [2024-07-23 00:28:18.277152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:04.975 [2024-07-23 00:28:18.277162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.290203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:04.975 [2024-07-23 00:28:18.290246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:04.975 [2024-07-23 00:28:18.290267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:04.975 [2024-07-23 00:28:18.290278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.298694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:04.975 [2024-07-23 00:28:18.298748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:04.975 [2024-07-23 00:28:18.298762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:04.975 [2024-07-23 00:28:18.298785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.298858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:04.975 [2024-07-23 00:28:18.298870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:04.975 [2024-07-23 00:28:18.298882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:04.975 [2024-07-23 00:28:18.298892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.298926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:04.975 [2024-07-23 00:28:18.298944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:04.975 [2024-07-23 00:28:18.298955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:04.975 [2024-07-23 00:28:18.298965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.299055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:04.975 [2024-07-23 00:28:18.299078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:04.975 [2024-07-23 00:28:18.299089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:04.975 [2024-07-23 00:28:18.299098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.299133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:04.975 [2024-07-23 00:28:18.299145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:24:04.975 [2024-07-23 00:28:18.299159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:04.975 [2024-07-23 00:28:18.299169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.975 [2024-07-23 00:28:18.299208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:04.975 [2024-07-23 00:28:18.299223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:04.975 [2024-07-23 00:28:18.299233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:04.976 [2024-07-23 00:28:18.299243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.976 [2024-07-23 00:28:18.299316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:24:04.976 [2024-07-23 00:28:18.299332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:04.976 [2024-07-23 00:28:18.299343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:04.976 [2024-07-23 00:28:18.299353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.976 [2024-07-23 00:28:18.299513] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 6892.892 ms, result 0 00:24:05.912 00:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:24:05.912 00:28:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:24:05.912 00:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:05.912 00:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:05.912 00:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:05.912 00:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93967 00:24:05.912 00:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:05.912 00:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93967 00:24:05.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.912 00:28:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 93967 ']' 00:24:05.912 00:28:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.912 00:28:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:05.912 00:28:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.912 00:28:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:05.912 00:28:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:05.912 00:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:05.912 [2024-07-23 00:28:20.583944] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
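The stretch above is the clean-restart pivot of the test: killprocess sends the old target (pid 93517) a plain SIGTERM so the 'FTL shutdown' management process can persist everything, then tcp_target_setup relaunches spdk_tgt (pid 93967) against the same tgt.json and waitforlisten blocks until the RPC socket answers. A minimal sketch of that flow, using the repo paths from this job and simplifying waitforlisten to a bare RPC poll (rpc_get_methods is a stock SPDK RPC; the retry cadence is an assumption):

    SPDK=/home/vagrant/spdk_repo/spdk
    # SIGTERM, not SIGKILL: FTL gets to run its shutdown sequence before exit.
    kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"
    # Relaunch with the saved config so FTL reloads the superblock it just persisted.
    "$SPDK/build/bin/spdk_tgt" '--cpumask=[0]' --config="$SPDK/test/ftl/config/tgt.json" &
    spdk_tgt_pid=$!
    # Poll until the UNIX-domain RPC socket is up (what waitforlisten does, condensed).
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done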
00:24:05.912 [2024-07-23 00:28:20.584100] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93967 ] 00:24:06.173 [2024-07-23 00:28:20.733341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.173 [2024-07-23 00:28:20.775178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.435 [2024-07-23 00:28:21.064295] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:06.435 [2024-07-23 00:28:21.064366] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:06.695 [2024-07-23 00:28:21.201490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:06.695 [2024-07-23 00:28:21.201541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:06.695 [2024-07-23 00:28:21.201556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:06.695 [2024-07-23 00:28:21.201567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:06.695 [2024-07-23 00:28:21.201630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:06.695 [2024-07-23 00:28:21.201645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:06.695 [2024-07-23 00:28:21.201656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:24:06.695 [2024-07-23 00:28:21.201671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:06.695 [2024-07-23 00:28:21.201695] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:06.695 [2024-07-23 00:28:21.201990] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:06.695 [2024-07-23 00:28:21.202011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:06.695 [2024-07-23 00:28:21.202022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:06.695 [2024-07-23 00:28:21.202033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.321 ms 00:24:06.695 [2024-07-23 00:28:21.202043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:06.695 [2024-07-23 00:28:21.203469] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:24:06.695 [2024-07-23 00:28:21.206005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:06.695 [2024-07-23 00:28:21.206044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:24:06.695 [2024-07-23 00:28:21.206062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.541 ms 00:24:06.695 [2024-07-23 00:28:21.206079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:06.695 [2024-07-23 00:28:21.206140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:06.695 [2024-07-23 00:28:21.206153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:24:06.695 [2024-07-23 00:28:21.206164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:24:06.695 [2024-07-23 00:28:21.206175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:06.695 [2024-07-23 00:28:21.212821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:06.695 [2024-07-23 00:28:21.212849] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:06.695 [2024-07-23 00:28:21.212861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.586 ms 00:24:06.695 [2024-07-23 00:28:21.212891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:06.695 [2024-07-23 00:28:21.212939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:06.695 [2024-07-23 00:28:21.212952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:06.695 [2024-07-23 00:28:21.212977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:24:06.695 [2024-07-23 00:28:21.212997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:06.695 [2024-07-23 00:28:21.213054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:06.695 [2024-07-23 00:28:21.213066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:06.695 [2024-07-23 00:28:21.213077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:24:06.696 [2024-07-23 00:28:21.213087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:06.696 [2024-07-23 00:28:21.213114] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:06.696 [2024-07-23 00:28:21.214752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:06.696 [2024-07-23 00:28:21.214782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:06.696 [2024-07-23 00:28:21.214794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.646 ms 00:24:06.696 [2024-07-23 00:28:21.214808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:06.696 [2024-07-23 00:28:21.214849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:06.696 [2024-07-23 00:28:21.214869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:06.696 [2024-07-23 00:28:21.214880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:06.696 [2024-07-23 00:28:21.214897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:06.696 [2024-07-23 00:28:21.214930] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:24:06.696 [2024-07-23 00:28:21.214953] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:24:06.696 [2024-07-23 00:28:21.214988] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:24:06.696 [2024-07-23 00:28:21.215015] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:24:06.696 [2024-07-23 00:28:21.215105] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:06.696 [2024-07-23 00:28:21.215118] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:06.696 [2024-07-23 00:28:21.215131] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:24:06.696 [2024-07-23 00:28:21.215151] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:06.696 [2024-07-23 00:28:21.215163] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 
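The region sizes in the layout dump that follows fall straight out of the superblock parameters printed with it; for example, the l2p region is just the L2P entry count (3774873, printed just below) times the 4-byte address size. A quick check of that arithmetic:

    # 3774873 L2P entries x 4-byte addresses = 15099492 bytes ~= 14.40 MiB,
    # which lands (after block alignment) on the 14.50 MiB l2p region in the dump.
    echo $(( 3774873 * 4 ))
    awk 'BEGIN { printf "%.2f MiB\n", 3774873 * 4 / 1048576 }'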
00:24:06.696 [2024-07-23 00:28:21.215174] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:06.696 [2024-07-23 00:28:21.215199] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:06.696 [2024-07-23 00:28:21.215212] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:06.696 [2024-07-23 00:28:21.215222] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:06.696 [2024-07-23 00:28:21.215247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:06.696 [2024-07-23 00:28:21.215258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:06.696 [2024-07-23 00:28:21.215281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.320 ms 00:24:06.696 [2024-07-23 00:28:21.215298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:06.696 [2024-07-23 00:28:21.215375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:06.696 [2024-07-23 00:28:21.215386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:06.696 [2024-07-23 00:28:21.215396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:24:06.696 [2024-07-23 00:28:21.215413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:06.696 [2024-07-23 00:28:21.215501] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:06.696 [2024-07-23 00:28:21.215527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:06.696 [2024-07-23 00:28:21.215538] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:06.696 [2024-07-23 00:28:21.215548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:06.696 [2024-07-23 00:28:21.215575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:06.696 [2024-07-23 00:28:21.215584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:06.696 [2024-07-23 00:28:21.215597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:06.696 [2024-07-23 00:28:21.215606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:06.696 [2024-07-23 00:28:21.215615] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:06.696 [2024-07-23 00:28:21.215624] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:06.696 [2024-07-23 00:28:21.215633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:06.696 [2024-07-23 00:28:21.215642] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:24:06.696 [2024-07-23 00:28:21.215651] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:06.696 [2024-07-23 00:28:21.215661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:06.696 [2024-07-23 00:28:21.215671] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:24:06.696 [2024-07-23 00:28:21.215680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:06.696 [2024-07-23 00:28:21.215689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:06.696 [2024-07-23 00:28:21.215699] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:06.696 [2024-07-23 00:28:21.215707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:06.696 [2024-07-23 00:28:21.215717] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl] Region p2l0 00:24:06.696 [2024-07-23 00:28:21.215726] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:06.696 [2024-07-23 00:28:21.215735] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:06.696 [2024-07-23 00:28:21.215747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:06.696 [2024-07-23 00:28:21.215756] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:06.696 [2024-07-23 00:28:21.215765] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:06.696 [2024-07-23 00:28:21.215774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:06.696 [2024-07-23 00:28:21.215783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:06.696 [2024-07-23 00:28:21.215792] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:06.696 [2024-07-23 00:28:21.215801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:06.696 [2024-07-23 00:28:21.215810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:06.696 [2024-07-23 00:28:21.215819] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:06.696 [2024-07-23 00:28:21.215828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:06.696 [2024-07-23 00:28:21.215837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:06.696 [2024-07-23 00:28:21.215846] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:06.696 [2024-07-23 00:28:21.215854] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:06.696 [2024-07-23 00:28:21.215864] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:06.696 [2024-07-23 00:28:21.215872] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:06.696 [2024-07-23 00:28:21.215881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:06.696 [2024-07-23 00:28:21.215893] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:06.696 [2024-07-23 00:28:21.215902] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:06.696 [2024-07-23 00:28:21.215911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:06.696 [2024-07-23 00:28:21.215920] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:06.696 [2024-07-23 00:28:21.215929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:06.696 [2024-07-23 00:28:21.215937] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:24:06.696 [2024-07-23 00:28:21.215954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:06.696 [2024-07-23 00:28:21.215964] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:06.696 [2024-07-23 00:28:21.215979] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:06.696 [2024-07-23 00:28:21.215996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:06.696 [2024-07-23 00:28:21.216006] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:06.696 [2024-07-23 00:28:21.216015] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:06.696 [2024-07-23 00:28:21.216024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:06.696 [2024-07-23 00:28:21.216033] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:06.696 [2024-07-23 00:28:21.216042] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:06.696 [2024-07-23 00:28:21.216052] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:06.696 [2024-07-23 00:28:21.216072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:06.696 [2024-07-23 00:28:21.216083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:06.696 [2024-07-23 00:28:21.216094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:06.696 [2024-07-23 00:28:21.216104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:06.696 [2024-07-23 00:28:21.216114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:06.696 [2024-07-23 00:28:21.216124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:06.696 [2024-07-23 00:28:21.216134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:06.696 [2024-07-23 00:28:21.216144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:06.696 [2024-07-23 00:28:21.216154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:06.696 [2024-07-23 00:28:21.216164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:06.696 [2024-07-23 00:28:21.216174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:06.696 [2024-07-23 00:28:21.216185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:06.696 [2024-07-23 00:28:21.216195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:06.696 [2024-07-23 00:28:21.216205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:24:06.696 [2024-07-23 00:28:21.216215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:06.697 [2024-07-23 00:28:21.216225] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:06.697 [2024-07-23 00:28:21.216241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:06.697 [2024-07-23 00:28:21.216253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:06.697 [2024-07-23 00:28:21.216274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x480000 00:24:06.697 [2024-07-23 00:28:21.216285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:06.697 [2024-07-23 00:28:21.216295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:06.697 [2024-07-23 00:28:21.216306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:06.697 [2024-07-23 00:28:21.216316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:06.697 [2024-07-23 00:28:21.216326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.856 ms 00:24:06.697 [2024-07-23 00:28:21.216341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:06.697 [2024-07-23 00:28:21.216389] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:24:06.697 [2024-07-23 00:28:21.216402] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:24:09.987 [2024-07-23 00:28:24.505518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.505584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:24:09.987 [2024-07-23 00:28:24.505613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3294.467 ms 00:24:09.987 [2024-07-23 00:28:24.505625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.516358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.516407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:09.987 [2024-07-23 00:28:24.516422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.651 ms 00:24:09.987 [2024-07-23 00:28:24.516447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.516505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.516524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:09.987 [2024-07-23 00:28:24.516542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:24:09.987 [2024-07-23 00:28:24.516558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.527014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.527057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:09.987 [2024-07-23 00:28:24.527089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.418 ms 00:24:09.987 [2024-07-23 00:28:24.527099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.527138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.527154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:09.987 [2024-07-23 00:28:24.527173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:09.987 [2024-07-23 00:28:24.527183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.527679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.527705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Initialize trim map 00:24:09.987 [2024-07-23 00:28:24.527716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.445 ms 00:24:09.987 [2024-07-23 00:28:24.527726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.527766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.527778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:09.987 [2024-07-23 00:28:24.527793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:24:09.987 [2024-07-23 00:28:24.527803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.535096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.535133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:09.987 [2024-07-23 00:28:24.535162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.280 ms 00:24:09.987 [2024-07-23 00:28:24.535172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.537806] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:09.987 [2024-07-23 00:28:24.537848] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:24:09.987 [2024-07-23 00:28:24.537863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.537874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:24:09.987 [2024-07-23 00:28:24.537885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.593 ms 00:24:09.987 [2024-07-23 00:28:24.537894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.541266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.541323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:24:09.987 [2024-07-23 00:28:24.541342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.335 ms 00:24:09.987 [2024-07-23 00:28:24.541352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.542731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.542766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:24:09.987 [2024-07-23 00:28:24.542778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.337 ms 00:24:09.987 [2024-07-23 00:28:24.542787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.544101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.544134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:24:09.987 [2024-07-23 00:28:24.544145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.278 ms 00:24:09.987 [2024-07-23 00:28:24.544155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.544451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.544470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:09.987 [2024-07-23 00:28:24.544482] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.229 ms 00:24:09.987 [2024-07-23 00:28:24.544495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.585361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.585427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:24:09.987 [2024-07-23 00:28:24.585450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.903 ms 00:24:09.987 [2024-07-23 00:28:24.585466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.594011] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:09.987 [2024-07-23 00:28:24.594682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.594710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:09.987 [2024-07-23 00:28:24.594723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.157 ms 00:24:09.987 [2024-07-23 00:28:24.594733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.594804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.594817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:24:09.987 [2024-07-23 00:28:24.594829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:24:09.987 [2024-07-23 00:28:24.594840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.594889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.594905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:09.987 [2024-07-23 00:28:24.594916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:24:09.987 [2024-07-23 00:28:24.594926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.594950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.594961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:09.987 [2024-07-23 00:28:24.594972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:09.987 [2024-07-23 00:28:24.594982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.595016] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:24:09.987 [2024-07-23 00:28:24.595028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.595038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:24:09.987 [2024-07-23 00:28:24.595051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:24:09.987 [2024-07-23 00:28:24.595062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.598312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.598346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:24:09.987 [2024-07-23 00:28:24.598360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.234 ms 00:24:09.987 [2024-07-23 00:28:24.598370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:24:09.987 [2024-07-23 00:28:24.598440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:09.987 [2024-07-23 00:28:24.598463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:09.987 [2024-07-23 00:28:24.598478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:24:09.987 [2024-07-23 00:28:24.598489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:09.988 [2024-07-23 00:28:24.599657] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3403.250 ms, result 0 00:24:09.988 [2024-07-23 00:28:24.615053] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.988 [2024-07-23 00:28:24.631030] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:09.988 [2024-07-23 00:28:24.639126] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:10.247 00:28:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:10.247 00:28:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:24:10.247 00:28:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:10.247 00:28:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:24:10.247 00:28:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:24:10.507 [2024-07-23 00:28:24.934775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:10.507 [2024-07-23 00:28:24.934830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:10.507 [2024-07-23 00:28:24.934860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:24:10.507 [2024-07-23 00:28:24.934871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:10.507 [2024-07-23 00:28:24.934906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:10.507 [2024-07-23 00:28:24.934919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:10.507 [2024-07-23 00:28:24.934929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:10.507 [2024-07-23 00:28:24.934939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:10.507 [2024-07-23 00:28:24.934962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:10.507 [2024-07-23 00:28:24.934973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:10.507 [2024-07-23 00:28:24.934983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:10.507 [2024-07-23 00:28:24.934992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:10.507 [2024-07-23 00:28:24.935047] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.270 ms, result 0 00:24:10.507 true 00:24:10.507 00:28:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:10.507 { 00:24:10.507 "name": "ftl", 00:24:10.507 "properties": [ 00:24:10.507 { 00:24:10.507 "name": "superblock_version", 00:24:10.507 "value": 5, 00:24:10.507 "read-only": true 00:24:10.507 }, 00:24:10.507 { 
00:24:10.507 "name": "base_device", 00:24:10.507 "bands": [ 00:24:10.507 { 00:24:10.507 "id": 0, 00:24:10.507 "state": "CLOSED", 00:24:10.507 "validity": 1.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 1, 00:24:10.507 "state": "CLOSED", 00:24:10.507 "validity": 1.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 2, 00:24:10.507 "state": "CLOSED", 00:24:10.507 "validity": 0.007843137254901933 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 3, 00:24:10.507 "state": "FREE", 00:24:10.507 "validity": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 4, 00:24:10.507 "state": "FREE", 00:24:10.507 "validity": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 5, 00:24:10.507 "state": "FREE", 00:24:10.507 "validity": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 6, 00:24:10.507 "state": "FREE", 00:24:10.507 "validity": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 7, 00:24:10.507 "state": "FREE", 00:24:10.507 "validity": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 8, 00:24:10.507 "state": "FREE", 00:24:10.507 "validity": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 9, 00:24:10.507 "state": "FREE", 00:24:10.507 "validity": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 10, 00:24:10.507 "state": "FREE", 00:24:10.507 "validity": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 11, 00:24:10.507 "state": "FREE", 00:24:10.507 "validity": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 12, 00:24:10.507 "state": "FREE", 00:24:10.507 "validity": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 13, 00:24:10.507 "state": "FREE", 00:24:10.507 "validity": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 14, 00:24:10.507 "state": "FREE", 00:24:10.507 "validity": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 15, 00:24:10.507 "state": "FREE", 00:24:10.507 "validity": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 16, 00:24:10.507 "state": "FREE", 00:24:10.507 "validity": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 17, 00:24:10.507 "state": "FREE", 00:24:10.507 "validity": 0.0 00:24:10.507 } 00:24:10.507 ], 00:24:10.507 "read-only": true 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "name": "cache_device", 00:24:10.507 "type": "bdev", 00:24:10.507 "chunks": [ 00:24:10.507 { 00:24:10.507 "id": 0, 00:24:10.507 "state": "INACTIVE", 00:24:10.507 "utilization": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 1, 00:24:10.507 "state": "OPEN", 00:24:10.507 "utilization": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 2, 00:24:10.507 "state": "OPEN", 00:24:10.507 "utilization": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 3, 00:24:10.507 "state": "FREE", 00:24:10.507 "utilization": 0.0 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "id": 4, 00:24:10.507 "state": "FREE", 00:24:10.507 "utilization": 0.0 00:24:10.507 } 00:24:10.507 ], 00:24:10.507 "read-only": true 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "name": "verbose_mode", 00:24:10.507 "value": true, 00:24:10.507 "unit": "", 00:24:10.507 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:24:10.507 }, 00:24:10.507 { 00:24:10.507 "name": "prep_upgrade_on_shutdown", 00:24:10.507 "value": false, 00:24:10.507 "unit": "", 00:24:10.507 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:24:10.507 } 00:24:10.507 ] 00:24:10.507 } 00:24:10.507 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:24:10.507 00:28:25 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:24:10.507 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:10.767 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:24:10.767 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:24:10.767 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:24:10.767 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:24:10.767 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:11.028 Validate MD5 checksum, iteration 1 00:24:11.028 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:24:11.028 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:24:11.028 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:24:11.028 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:24:11.028 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:24:11.028 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:11.028 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:24:11.028 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:11.028 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:11.028 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:11.028 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:11.028 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:11.028 00:28:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:11.028 [2024-07-23 00:28:25.618349] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
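The used/opened probes a few records up are the gate for the checksum phase: the test turns on verbose_mode, dumps the FTL properties, and counts NV cache chunks (and bands) that still hold data; both counts must be 0 after a clean restart. The same probe can be run by hand with only the RPCs and jq filter that appear in this log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_ftl_set_property -b ftl -p verbose_mode -v true   # expose the advanced properties
    # Count NV cache chunks with non-zero utilization (expected: 0 right after restart):
    $RPC bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device") | .chunks[]
             | select(.utilization != 0.0)] | length'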
00:24:11.028 [2024-07-23 00:28:25.618499] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94036 ] 00:24:11.296 [2024-07-23 00:28:25.769599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.296 [2024-07-23 00:28:25.816406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.774  Copying: 734/1024 [MB] (734 MBps) Copying: 1024/1024 [MB] (average 725 MBps) 00:24:15.774 00:24:15.774 00:28:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:24:15.774 00:28:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:17.150 00:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:17.150 Validate MD5 checksum, iteration 2 00:24:17.150 00:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f00750813746db45513cbf7446d9c40f 00:24:17.150 00:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f00750813746db45513cbf7446d9c40f != \f\0\0\7\5\0\8\1\3\7\4\6\d\b\4\5\5\1\3\c\b\f\7\4\4\6\d\9\c\4\0\f ]] 00:24:17.150 00:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:17.150 00:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:17.150 00:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:24:17.150 00:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:17.150 00:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:17.150 00:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:17.150 00:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:17.150 00:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:17.150 00:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:17.150 [2024-07-23 00:28:31.757170] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
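Both 'Validate MD5 checksum' iterations follow the same shape: tcp_dd pulls a 1 GiB window (1024 x 1 MiB blocks, queue depth 2) from ftln1 over the NVMe/TCP attach described by ini.json, md5sum hashes the output file, and the sum is compared against the value recorded earlier in the test when the data was written. A condensed sketch of the loop; the expected array is an assumption standing in for those recorded sums (the two values are the ones this run produced):

    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
    expected=(f00750813746db45513cbf7446d9c40f 9b224b0ae377b033c4d2e01c37e335e6)
    skip=0
    for ((i = 0; i < 2; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # tcp_dd wraps spdk_dd with the flags traced above.
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        sum=$(md5sum "$testfile" | cut -f1 '-d ')
        [[ $sum == "${expected[i]}" ]] || exit 1
        skip=$((skip + 1024))
    done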
00:24:17.150 [2024-07-23 00:28:31.757475] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94103 ] 00:24:17.408 [2024-07-23 00:28:31.908887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.408 [2024-07-23 00:28:31.953679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.935  Copying: 742/1024 [MB] (742 MBps) Copying: 1024/1024 [MB] (average 727 MBps) 00:24:19.935 00:24:19.935 00:28:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:24:19.935 00:28:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9b224b0ae377b033c4d2e01c37e335e6 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9b224b0ae377b033c4d2e01c37e335e6 != \9\b\2\2\4\b\0\a\e\3\7\7\b\0\3\3\c\4\d\2\e\0\1\c\3\7\e\3\3\5\e\6 ]] 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 93967 ]] 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 93967 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94153 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94153 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 94153 ']' 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
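At this point the harness has killed the target hard (kill -9 93967, so FTL never persists a clean-shutdown marker) and relaunched spdk_tgt from the tgt.json captured earlier; the messages that follow are the new instance (pid 94153) recovering the device from that dirty state. A sketch of the shutdown/restart pair as the ftl/common.sh xtrace shows it (backgrounding and the waitforlisten helper are assumptions based on the @85/@91 trace lines):

    tcp_target_shutdown_dirty() {
        # SIGKILL: the target gets no chance to perform a clean shutdown
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        # Relaunch from the JSON config saved before the kill; the new
        # instance must replay P2L checkpoints and open NV-cache chunks
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
            --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
        spdk_tgt_pid=$!
        waitforlisten $spdk_tgt_pid
    }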
00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:21.834 00:28:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:21.834 [2024-07-23 00:28:36.187545] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:21.834 [2024-07-23 00:28:36.187677] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94153 ] 00:24:21.835 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 826: 93967 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:24:21.835 [2024-07-23 00:28:36.336761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.835 [2024-07-23 00:28:36.378604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.092 [2024-07-23 00:28:36.674071] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:22.092 [2024-07-23 00:28:36.674163] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:22.352 [2024-07-23 00:28:36.811178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.352 [2024-07-23 00:28:36.811221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:22.352 [2024-07-23 00:28:36.811236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:22.352 [2024-07-23 00:28:36.811246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.352 [2024-07-23 00:28:36.811318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.352 [2024-07-23 00:28:36.811331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:22.352 [2024-07-23 00:28:36.811342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:24:22.352 [2024-07-23 00:28:36.811364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.352 [2024-07-23 00:28:36.811387] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:22.352 [2024-07-23 00:28:36.811664] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:22.352 [2024-07-23 00:28:36.811682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.352 [2024-07-23 00:28:36.811692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:22.352 [2024-07-23 00:28:36.811703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.299 ms 00:24:22.352 [2024-07-23 00:28:36.811713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.352 [2024-07-23 00:28:36.812078] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:24:22.352 [2024-07-23 00:28:36.816216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.352 [2024-07-23 00:28:36.816248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:24:22.352 [2024-07-23 00:28:36.816271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.146 ms 00:24:22.352 [2024-07-23 00:28:36.816286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.352 [2024-07-23 00:28:36.817269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.352 
[2024-07-23 00:28:36.817295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:24:22.352 [2024-07-23 00:28:36.817306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:24:22.352 [2024-07-23 00:28:36.817323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.352 [2024-07-23 00:28:36.817720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.352 [2024-07-23 00:28:36.817737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:22.352 [2024-07-23 00:28:36.817748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.337 ms 00:24:22.352 [2024-07-23 00:28:36.817757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.352 [2024-07-23 00:28:36.817795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.352 [2024-07-23 00:28:36.817808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:22.352 [2024-07-23 00:28:36.817818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:24:22.352 [2024-07-23 00:28:36.817828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.352 [2024-07-23 00:28:36.817860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.352 [2024-07-23 00:28:36.817871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:22.352 [2024-07-23 00:28:36.817881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:24:22.352 [2024-07-23 00:28:36.817890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.352 [2024-07-23 00:28:36.817911] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:22.352 [2024-07-23 00:28:36.818686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.352 [2024-07-23 00:28:36.818702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:22.352 [2024-07-23 00:28:36.818717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.780 ms 00:24:22.352 [2024-07-23 00:28:36.818738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.352 [2024-07-23 00:28:36.818772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.352 [2024-07-23 00:28:36.818783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:22.352 [2024-07-23 00:28:36.818793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:22.352 [2024-07-23 00:28:36.818802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.352 [2024-07-23 00:28:36.818836] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:24:22.352 [2024-07-23 00:28:36.818858] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:24:22.352 [2024-07-23 00:28:36.818891] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:24:22.352 [2024-07-23 00:28:36.818916] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:24:22.352 [2024-07-23 00:28:36.818999] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:22.352 [2024-07-23 00:28:36.819012] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:22.352 [2024-07-23 00:28:36.819025] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:24:22.352 [2024-07-23 00:28:36.819038] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:22.352 [2024-07-23 00:28:36.819050] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:24:22.352 [2024-07-23 00:28:36.819061] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:22.352 [2024-07-23 00:28:36.819074] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:22.352 [2024-07-23 00:28:36.819090] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:22.352 [2024-07-23 00:28:36.819100] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:22.352 [2024-07-23 00:28:36.819113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.352 [2024-07-23 00:28:36.819123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:22.352 [2024-07-23 00:28:36.819133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.280 ms 00:24:22.352 [2024-07-23 00:28:36.819144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.352 [2024-07-23 00:28:36.819212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.352 [2024-07-23 00:28:36.819223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:22.352 [2024-07-23 00:28:36.819234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:24:22.352 [2024-07-23 00:28:36.819243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.352 [2024-07-23 00:28:36.819352] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:22.352 [2024-07-23 00:28:36.819371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:22.352 [2024-07-23 00:28:36.819382] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:22.352 [2024-07-23 00:28:36.819392] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:22.352 [2024-07-23 00:28:36.819405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:22.352 [2024-07-23 00:28:36.819414] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:22.352 [2024-07-23 00:28:36.819426] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:22.352 [2024-07-23 00:28:36.819436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:22.352 [2024-07-23 00:28:36.819445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:22.352 [2024-07-23 00:28:36.819454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:22.352 [2024-07-23 00:28:36.819463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:22.352 [2024-07-23 00:28:36.819472] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:24:22.352 [2024-07-23 00:28:36.819482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:22.352 [2024-07-23 00:28:36.819491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:22.352 [2024-07-23 00:28:36.819500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:24:22.352 [2024-07-23 00:28:36.819508] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:22.352 [2024-07-23 00:28:36.819517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:22.352 [2024-07-23 00:28:36.819527] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:22.352 [2024-07-23 00:28:36.819536] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:22.352 [2024-07-23 00:28:36.819545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:24:22.352 [2024-07-23 00:28:36.819557] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:22.352 [2024-07-23 00:28:36.819566] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:22.352 [2024-07-23 00:28:36.819575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:22.352 [2024-07-23 00:28:36.819583] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:22.352 [2024-07-23 00:28:36.819592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:22.352 [2024-07-23 00:28:36.819601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:22.352 [2024-07-23 00:28:36.819610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:22.352 [2024-07-23 00:28:36.819619] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:22.352 [2024-07-23 00:28:36.819627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:22.352 [2024-07-23 00:28:36.819636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:22.353 [2024-07-23 00:28:36.819645] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:22.353 [2024-07-23 00:28:36.819654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:22.353 [2024-07-23 00:28:36.819663] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:22.353 [2024-07-23 00:28:36.819672] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:22.353 [2024-07-23 00:28:36.819681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:22.353 [2024-07-23 00:28:36.819690] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:22.353 [2024-07-23 00:28:36.819701] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:22.353 [2024-07-23 00:28:36.819710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:22.353 [2024-07-23 00:28:36.819720] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:22.353 [2024-07-23 00:28:36.819729] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:22.353 [2024-07-23 00:28:36.819738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:22.353 [2024-07-23 00:28:36.819747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:22.353 [2024-07-23 00:28:36.819756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:22.353 [2024-07-23 00:28:36.819764] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:24:22.353 [2024-07-23 00:28:36.819774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:22.353 [2024-07-23 00:28:36.819783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:22.353 [2024-07-23 00:28:36.819793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:22.353 [2024-07-23 
00:28:36.819803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:22.353 [2024-07-23 00:28:36.819812] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:22.353 [2024-07-23 00:28:36.819821] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:22.353 [2024-07-23 00:28:36.819830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:22.353 [2024-07-23 00:28:36.819839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:22.353 [2024-07-23 00:28:36.819854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:22.353 [2024-07-23 00:28:36.819864] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:22.353 [2024-07-23 00:28:36.819876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:22.353 [2024-07-23 00:28:36.819888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:22.353 [2024-07-23 00:28:36.819898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:22.353 [2024-07-23 00:28:36.819909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:22.353 [2024-07-23 00:28:36.819919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:22.353 [2024-07-23 00:28:36.819930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:22.353 [2024-07-23 00:28:36.819940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:22.353 [2024-07-23 00:28:36.819950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:22.353 [2024-07-23 00:28:36.819961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:22.353 [2024-07-23 00:28:36.819971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:22.353 [2024-07-23 00:28:36.819982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:22.353 [2024-07-23 00:28:36.819992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:22.353 [2024-07-23 00:28:36.820002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:22.353 [2024-07-23 00:28:36.820013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:24:22.353 [2024-07-23 00:28:36.820026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:22.353 [2024-07-23 00:28:36.820036] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:22.353 
[2024-07-23 00:28:36.820048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:22.353 [2024-07-23 00:28:36.820061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:22.353 [2024-07-23 00:28:36.820071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:24:22.353 [2024-07-23 00:28:36.820082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:22.353 [2024-07-23 00:28:36.820092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:22.353 [2024-07-23 00:28:36.820103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.353 [2024-07-23 00:28:36.820113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:22.353 [2024-07-23 00:28:36.820123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.802 ms 00:24:22.353 [2024-07-23 00:28:36.820140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.353 [2024-07-23 00:28:36.829401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.353 [2024-07-23 00:28:36.829427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:22.353 [2024-07-23 00:28:36.829450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.225 ms 00:24:22.353 [2024-07-23 00:28:36.829468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.353 [2024-07-23 00:28:36.829509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.353 [2024-07-23 00:28:36.829524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:22.353 [2024-07-23 00:28:36.829551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:24:22.353 [2024-07-23 00:28:36.829562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.353 [2024-07-23 00:28:36.840003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.353 [2024-07-23 00:28:36.840032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:22.353 [2024-07-23 00:28:36.840049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.396 ms 00:24:22.353 [2024-07-23 00:28:36.840066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.353 [2024-07-23 00:28:36.840135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.353 [2024-07-23 00:28:36.840148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:22.353 [2024-07-23 00:28:36.840159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:22.353 [2024-07-23 00:28:36.840169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.353 [2024-07-23 00:28:36.840282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.353 [2024-07-23 00:28:36.840295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:22.353 [2024-07-23 00:28:36.840306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:24:22.353 [2024-07-23 00:28:36.840315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:24:22.353 [2024-07-23 00:28:36.840353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.353 [2024-07-23 00:28:36.840373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:22.353 [2024-07-23 00:28:36.840390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:24:22.353 [2024-07-23 00:28:36.840406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.353 [2024-07-23 00:28:36.847390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.353 [2024-07-23 00:28:36.847431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:22.353 [2024-07-23 00:28:36.847443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.959 ms 00:24:22.353 [2024-07-23 00:28:36.847453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.353 [2024-07-23 00:28:36.847568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.353 [2024-07-23 00:28:36.847582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:24:22.353 [2024-07-23 00:28:36.847593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:24:22.353 [2024-07-23 00:28:36.847603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.353 [2024-07-23 00:28:36.862983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.353 [2024-07-23 00:28:36.863029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:24:22.353 [2024-07-23 00:28:36.863046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.368 ms 00:24:22.353 [2024-07-23 00:28:36.863060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.353 [2024-07-23 00:28:36.864630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.353 [2024-07-23 00:28:36.864672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:22.353 [2024-07-23 00:28:36.864688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.298 ms 00:24:22.353 [2024-07-23 00:28:36.864711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.353 [2024-07-23 00:28:36.885612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.353 [2024-07-23 00:28:36.885677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:24:22.353 [2024-07-23 00:28:36.885694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.890 ms 00:24:22.353 [2024-07-23 00:28:36.885704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.353 [2024-07-23 00:28:36.885867] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:24:22.353 [2024-07-23 00:28:36.885979] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:24:22.353 [2024-07-23 00:28:36.886074] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:24:22.353 [2024-07-23 00:28:36.886170] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:24:22.353 [2024-07-23 00:28:36.886182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.353 [2024-07-23 00:28:36.886202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:24:22.353 [2024-07-23 00:28:36.886213] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.424 ms 00:24:22.354 [2024-07-23 00:28:36.886223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.354 [2024-07-23 00:28:36.886283] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:24:22.354 [2024-07-23 00:28:36.886314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.354 [2024-07-23 00:28:36.886332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:24:22.354 [2024-07-23 00:28:36.886344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:24:22.354 [2024-07-23 00:28:36.886360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.354 [2024-07-23 00:28:36.888911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.354 [2024-07-23 00:28:36.888947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:24:22.354 [2024-07-23 00:28:36.888970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.508 ms 00:24:22.354 [2024-07-23 00:28:36.888981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.354 [2024-07-23 00:28:36.889580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.354 [2024-07-23 00:28:36.889608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:24:22.354 [2024-07-23 00:28:36.889620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:24:22.354 [2024-07-23 00:28:36.889629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.354 [2024-07-23 00:28:36.889861] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:24:22.921 [2024-07-23 00:28:37.431274] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:24:22.921 [2024-07-23 00:28:37.431448] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:24:23.490 [2024-07-23 00:28:37.977374] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:24:23.490 [2024-07-23 00:28:37.977482] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:23.490 [2024-07-23 00:28:37.977498] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:24:23.490 [2024-07-23 00:28:37.977513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:23.490 [2024-07-23 00:28:37.977525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:24:23.490 [2024-07-23 00:28:37.977540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1089.612 ms 00:24:23.490 [2024-07-23 00:28:37.977550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:23.490 [2024-07-23 00:28:37.977584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:23.490 [2024-07-23 00:28:37.977596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:24:23.490 [2024-07-23 00:28:37.977607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:23.490 [2024-07-23 00:28:37.977626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:23.490 
[2024-07-23 00:28:37.984518] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:23.490 [2024-07-23 00:28:37.984662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:23.490 [2024-07-23 00:28:37.984675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:23.490 [2024-07-23 00:28:37.984687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.029 ms 00:24:23.490 [2024-07-23 00:28:37.984698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:23.490 [2024-07-23 00:28:37.985309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:23.490 [2024-07-23 00:28:37.985338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:24:23.490 [2024-07-23 00:28:37.985350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.546 ms 00:24:23.490 [2024-07-23 00:28:37.985360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:23.490 [2024-07-23 00:28:37.987241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:23.490 [2024-07-23 00:28:37.987267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:24:23.490 [2024-07-23 00:28:37.987294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.864 ms 00:24:23.490 [2024-07-23 00:28:37.987303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:23.490 [2024-07-23 00:28:37.987357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:23.490 [2024-07-23 00:28:37.987374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:24:23.490 [2024-07-23 00:28:37.987385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:23.490 [2024-07-23 00:28:37.987396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:23.490 [2024-07-23 00:28:37.987500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:23.490 [2024-07-23 00:28:37.987514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:23.490 [2024-07-23 00:28:37.987525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:24:23.490 [2024-07-23 00:28:37.987535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:23.490 [2024-07-23 00:28:37.987571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:23.490 [2024-07-23 00:28:37.987582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:23.490 [2024-07-23 00:28:37.987596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:23.490 [2024-07-23 00:28:37.987606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:23.490 [2024-07-23 00:28:37.987637] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:24:23.490 [2024-07-23 00:28:37.987648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:23.490 [2024-07-23 00:28:37.987665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:24:23.490 [2024-07-23 00:28:37.987682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:24:23.490 [2024-07-23 00:28:37.987692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:23.490 [2024-07-23 00:28:37.987739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:23.490 [2024-07-23 00:28:37.987750] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:23.490 [2024-07-23 00:28:37.987760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:24:23.490 [2024-07-23 00:28:37.987773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:23.490 [2024-07-23 00:28:37.988728] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1179.061 ms, result 0 00:24:23.490 [2024-07-23 00:28:38.001065] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.490 [2024-07-23 00:28:38.017064] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:23.490 [2024-07-23 00:28:38.025146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:24.057 00:28:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:24.057 00:28:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:24:24.057 00:28:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:24.057 00:28:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:24:24.057 00:28:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:24:24.057 00:28:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:24:24.057 00:28:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:24:24.057 00:28:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:24.057 00:28:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:24:24.057 Validate MD5 checksum, iteration 1 00:24:24.057 00:28:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:24.057 00:28:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:24.058 00:28:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:24.058 00:28:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:24.058 00:28:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:24.058 00:28:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:24.316 [2024-07-23 00:28:38.756441] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:24.316 [2024-07-23 00:28:38.756575] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94182 ] 00:24:24.316 [2024-07-23 00:28:38.908162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.316 [2024-07-23 00:28:38.953113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.861  Copying: 728/1024 [MB] (728 MBps) Copying: 1024/1024 [MB] (average 708 MBps) 00:24:26.861 00:24:27.120 00:28:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:24:27.120 00:28:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:29.020 00:28:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:29.020 Validate MD5 checksum, iteration 2 00:24:29.020 00:28:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f00750813746db45513cbf7446d9c40f 00:24:29.020 00:28:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f00750813746db45513cbf7446d9c40f != \f\0\0\7\5\0\8\1\3\7\4\6\d\b\4\5\5\1\3\c\b\f\7\4\4\6\d\9\c\4\0\f ]] 00:24:29.020 00:28:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:29.020 00:28:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:29.020 00:28:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:24:29.020 00:28:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:29.020 00:28:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:29.020 00:28:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:29.020 00:28:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:29.020 00:28:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:29.020 00:28:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:29.020 [2024-07-23 00:28:43.368201] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:29.020 [2024-07-23 00:28:43.368497] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94241 ] 00:24:29.020 [2024-07-23 00:28:43.519652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.020 [2024-07-23 00:28:43.562967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.529  Copying: 726/1024 [MB] (726 MBps) Copying: 1024/1024 [MB] (average 720 MBps) 00:24:33.529 00:24:33.529 00:28:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:24:33.529 00:28:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9b224b0ae377b033c4d2e01c37e335e6 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9b224b0ae377b033c4d2e01c37e335e6 != \9\b\2\2\4\b\0\a\e\3\7\7\b\0\3\3\c\4\d\2\e\0\1\c\3\7\e\3\3\5\e\6 ]] 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94153 ]] 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94153 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 94153 ']' 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 94153 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94153 00:24:35.435 killing process with pid 94153 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94153' 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 94153 00:24:35.435 00:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 94153 
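Both validation passes, before the kill and after recovery, run the same loop from upgrade_shutdown.sh (@96-105 in the xtrace): read 1024 MiB from ftln1 over NVMe/TCP at an advancing offset, hash it, and require the digest to match the one recorded when the data was written. Reconstructed as a sketch; the md5 reference array and the return-on-mismatch are assumptions, since the log only shows the literal comparisons succeeding:

    test_validate_checksum() {
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # tcp_dd wraps spdk_dd against the NVMe/TCP initiator config
            # (ini.json), as its expansion in the xtrace shows
            tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
                   --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))
            sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
            [[ $sum != "${md5[$i]}" ]] && return 1  # assumed failure branch
        done
    }

The digests match across the dirty restart (f00750813746db45513cbf7446d9c40f and 9b224b0ae377b033c4d2e01c37e335e6 in both passes), which is the property this test exists to prove.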
00:24:35.435 [2024-07-23 00:28:49.916915] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:24:35.435 [2024-07-23 00:28:49.921708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.435 [2024-07-23 00:28:49.921755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:24:35.435 [2024-07-23 00:28:49.921770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:35.435 [2024-07-23 00:28:49.921781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.435 [2024-07-23 00:28:49.921806] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:24:35.435 [2024-07-23 00:28:49.922533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.435 [2024-07-23 00:28:49.922563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:24:35.435 [2024-07-23 00:28:49.922576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.710 ms 00:24:35.435 [2024-07-23 00:28:49.922586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.435 [2024-07-23 00:28:49.922841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.435 [2024-07-23 00:28:49.922872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:24:35.435 [2024-07-23 00:28:49.922890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.227 ms 00:24:35.435 [2024-07-23 00:28:49.922910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.435 [2024-07-23 00:28:49.924156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.435 [2024-07-23 00:28:49.924206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:24:35.435 [2024-07-23 00:28:49.924221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.220 ms 00:24:35.435 [2024-07-23 00:28:49.924234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.435 [2024-07-23 00:28:49.925285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.435 [2024-07-23 00:28:49.925321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:24:35.435 [2024-07-23 00:28:49.925336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.001 ms 00:24:35.435 [2024-07-23 00:28:49.925354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.435 [2024-07-23 00:28:49.926687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.435 [2024-07-23 00:28:49.926728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:24:35.435 [2024-07-23 00:28:49.926744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.271 ms 00:24:35.435 [2024-07-23 00:28:49.926757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.435 [2024-07-23 00:28:49.928025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.435 [2024-07-23 00:28:49.928067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:24:35.435 [2024-07-23 00:28:49.928083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.235 ms 00:24:35.435 [2024-07-23 00:28:49.928103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.435 [2024-07-23 00:28:49.928175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.435 
[2024-07-23 00:28:49.928190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:24:35.435 [2024-07-23 00:28:49.928204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:24:35.435 [2024-07-23 00:28:49.928216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.435 [2024-07-23 00:28:49.929615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.435 [2024-07-23 00:28:49.929652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:24:35.435 [2024-07-23 00:28:49.929666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.376 ms 00:24:35.435 [2024-07-23 00:28:49.929679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.435 [2024-07-23 00:28:49.930965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.435 [2024-07-23 00:28:49.931007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:24:35.435 [2024-07-23 00:28:49.931021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.250 ms 00:24:35.435 [2024-07-23 00:28:49.931033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.435 [2024-07-23 00:28:49.932335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.435 [2024-07-23 00:28:49.932372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:24:35.435 [2024-07-23 00:28:49.932387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.269 ms 00:24:35.435 [2024-07-23 00:28:49.932399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.435 [2024-07-23 00:28:49.933595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.435 [2024-07-23 00:28:49.933632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:24:35.435 [2024-07-23 00:28:49.933646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.132 ms 00:24:35.435 [2024-07-23 00:28:49.933672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.435 [2024-07-23 00:28:49.933707] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:24:35.435 [2024-07-23 00:28:49.933726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:35.435 [2024-07-23 00:28:49.933741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:24:35.435 [2024-07-23 00:28:49.933756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:24:35.435 [2024-07-23 00:28:49.933770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:35.435 [2024-07-23 00:28:49.933784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:35.435 [2024-07-23 00:28:49.933801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:35.435 [2024-07-23 00:28:49.933820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:35.435 [2024-07-23 00:28:49.933850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:35.435 [2024-07-23 00:28:49.933870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:35.435 [2024-07-23 
00:28:49.933891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:35.435 [2024-07-23 00:28:49.933911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:35.435 [2024-07-23 00:28:49.933930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:35.435 [2024-07-23 00:28:49.933945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:35.435 [2024-07-23 00:28:49.933976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:35.435 [2024-07-23 00:28:49.933995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:35.435 [2024-07-23 00:28:49.934013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:35.435 [2024-07-23 00:28:49.934033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:35.435 [2024-07-23 00:28:49.934053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:35.435 [2024-07-23 00:28:49.934075] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:24:35.435 [2024-07-23 00:28:49.934107] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: dcbc8d67-06ed-4176-9b05-4dbd609a1a6c 00:24:35.435 [2024-07-23 00:28:49.934124] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:24:35.435 [2024-07-23 00:28:49.934146] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:24:35.435 [2024-07-23 00:28:49.934161] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:24:35.435 [2024-07-23 00:28:49.934178] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:24:35.435 [2024-07-23 00:28:49.934196] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:24:35.435 [2024-07-23 00:28:49.934225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:24:35.435 [2024-07-23 00:28:49.934243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:24:35.436 [2024-07-23 00:28:49.934274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:24:35.436 [2024-07-23 00:28:49.934288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:24:35.436 [2024-07-23 00:28:49.934305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.436 [2024-07-23 00:28:49.934319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:24:35.436 [2024-07-23 00:28:49.934337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.599 ms 00:24:35.436 [2024-07-23 00:28:49.934353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.436 [2024-07-23 00:28:49.936139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.436 [2024-07-23 00:28:49.936167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:24:35.436 [2024-07-23 00:28:49.936181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.754 ms 00:24:35.436 [2024-07-23 00:28:49.936217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.436 [2024-07-23 00:28:49.936339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.436 [2024-07-23 00:28:49.936354] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:24:35.436 [2024-07-23 00:28:49.936367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.094 ms 00:24:35.436 [2024-07-23 00:28:49.936390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.436 [2024-07-23 00:28:49.943462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:35.436 [2024-07-23 00:28:49.943492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:35.436 [2024-07-23 00:28:49.943504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:35.436 [2024-07-23 00:28:49.943514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.436 [2024-07-23 00:28:49.943561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:35.436 [2024-07-23 00:28:49.943572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:35.436 [2024-07-23 00:28:49.943583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:35.436 [2024-07-23 00:28:49.943593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.436 [2024-07-23 00:28:49.943673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:35.436 [2024-07-23 00:28:49.943685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:35.436 [2024-07-23 00:28:49.943696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:35.436 [2024-07-23 00:28:49.943705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.436 [2024-07-23 00:28:49.943732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:35.436 [2024-07-23 00:28:49.943743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:35.436 [2024-07-23 00:28:49.943753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:35.436 [2024-07-23 00:28:49.943763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.436 [2024-07-23 00:28:49.955553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:35.436 [2024-07-23 00:28:49.955597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:35.436 [2024-07-23 00:28:49.955610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:35.436 [2024-07-23 00:28:49.955620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.436 [2024-07-23 00:28:49.963985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:35.436 [2024-07-23 00:28:49.964029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:35.436 [2024-07-23 00:28:49.964059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:35.436 [2024-07-23 00:28:49.964069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.436 [2024-07-23 00:28:49.964150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:35.436 [2024-07-23 00:28:49.964168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:35.436 [2024-07-23 00:28:49.964178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:35.436 [2024-07-23 00:28:49.964188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.436 [2024-07-23 00:28:49.964224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Rollback 00:24:35.436 [2024-07-23 00:28:49.964236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:35.436 [2024-07-23 00:28:49.964246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:35.436 [2024-07-23 00:28:49.964255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.436 [2024-07-23 00:28:49.964342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:35.436 [2024-07-23 00:28:49.964359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:35.436 [2024-07-23 00:28:49.964369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:35.436 [2024-07-23 00:28:49.964378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.436 [2024-07-23 00:28:49.964413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:35.436 [2024-07-23 00:28:49.964447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:24:35.436 [2024-07-23 00:28:49.964462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:35.436 [2024-07-23 00:28:49.964479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.436 [2024-07-23 00:28:49.964537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:35.436 [2024-07-23 00:28:49.964553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:35.436 [2024-07-23 00:28:49.964576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:35.436 [2024-07-23 00:28:49.964588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.436 [2024-07-23 00:28:49.964647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:35.436 [2024-07-23 00:28:49.964667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:35.436 [2024-07-23 00:28:49.964685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:35.436 [2024-07-23 00:28:49.964700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.436 [2024-07-23 00:28:49.964888] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 43.191 ms, result 0 00:24:35.695 00:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:24:35.695 00:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:35.695 00:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:24:35.695 00:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:24:35.695 00:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:24:35.695 00:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:35.695 Remove shared memory files 00:24:35.695 00:28:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:24:35.695 00:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:35.695 00:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:24:35.695 00:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:24:35.696 00:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid93967 00:24:35.696 00:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f 
rm -f /dev/shm/iscsi 00:24:35.696 00:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:24:35.696 ************************************ 00:24:35.696 END TEST ftl_upgrade_shutdown 00:24:35.696 ************************************ 00:24:35.696 00:24:35.696 real 1m7.682s 00:24:35.696 user 1m29.398s 00:24:35.696 sys 0m19.766s 00:24:35.696 00:28:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:35.696 00:28:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:35.696 00:28:50 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:24:35.696 00:28:50 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:24:35.696 00:28:50 ftl -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:24:35.696 00:28:50 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:35.696 00:28:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:35.696 ************************************ 00:24:35.696 START TEST ftl_restore_fast 00:24:35.696 ************************************ 00:24:35.696 00:28:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:24:35.955 * Looking for test storage... 00:24:35.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
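The xtrace entering here is ftl/common.sh locating the repository relative to the test script. A minimal sketch of that idiom, reconstructed from the trace (the use of $0 is an assumption; the script may reference itself differently):

  # Resolve the test directory from the script path, then walk two
  # levels up to the SPDK repo root; rpc.py is addressed from there.
  testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/ftl
  rootdir=$(readlink -f "$testdir/../..")    # /home/vagrant/spdk_repo/spdk
  rpc_py=$rootdir/scripts/rpc.py

The rootdir assignment and the run of export/assignment pairs that follow pin down the same tgt.json/ini.json config paths and core masks used by every ftl test.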
00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.hxu8gf6dkq 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:24:35.955 00:28:50 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=94383 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 94383 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- common/autotest_common.sh@827 -- # '[' -z 94383 ']' 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:35.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:35.955 00:28:50 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:24:35.955 [2024-07-23 00:28:50.570978] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:35.955 [2024-07-23 00:28:50.571120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94383 ] 00:24:36.215 [2024-07-23 00:28:50.723574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.215 [2024-07-23 00:28:50.765288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.782 00:28:51 ftl.ftl_restore_fast -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:36.782 00:28:51 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # return 0 00:24:36.782 00:28:51 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:36.782 00:28:51 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:24:36.782 00:28:51 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:36.782 00:28:51 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:24:36.782 00:28:51 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:24:36.782 00:28:51 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:37.041 00:28:51 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:37.041 00:28:51 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:24:37.041 00:28:51 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:37.041 00:28:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:24:37.041 00:28:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:37.041 00:28:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:24:37.041 00:28:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local 
nb 00:24:37.041 00:28:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:37.300 00:28:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:37.300 { 00:24:37.300 "name": "nvme0n1", 00:24:37.300 "aliases": [ 00:24:37.300 "7b080a78-0dab-486c-8656-3ea818795524" 00:24:37.300 ], 00:24:37.300 "product_name": "NVMe disk", 00:24:37.300 "block_size": 4096, 00:24:37.300 "num_blocks": 1310720, 00:24:37.300 "uuid": "7b080a78-0dab-486c-8656-3ea818795524", 00:24:37.300 "assigned_rate_limits": { 00:24:37.300 "rw_ios_per_sec": 0, 00:24:37.300 "rw_mbytes_per_sec": 0, 00:24:37.300 "r_mbytes_per_sec": 0, 00:24:37.300 "w_mbytes_per_sec": 0 00:24:37.300 }, 00:24:37.300 "claimed": true, 00:24:37.300 "claim_type": "read_many_write_one", 00:24:37.300 "zoned": false, 00:24:37.300 "supported_io_types": { 00:24:37.300 "read": true, 00:24:37.300 "write": true, 00:24:37.300 "unmap": true, 00:24:37.300 "write_zeroes": true, 00:24:37.300 "flush": true, 00:24:37.300 "reset": true, 00:24:37.300 "compare": true, 00:24:37.300 "compare_and_write": false, 00:24:37.300 "abort": true, 00:24:37.300 "nvme_admin": true, 00:24:37.300 "nvme_io": true 00:24:37.300 }, 00:24:37.300 "driver_specific": { 00:24:37.300 "nvme": [ 00:24:37.300 { 00:24:37.300 "pci_address": "0000:00:11.0", 00:24:37.300 "trid": { 00:24:37.300 "trtype": "PCIe", 00:24:37.300 "traddr": "0000:00:11.0" 00:24:37.300 }, 00:24:37.300 "ctrlr_data": { 00:24:37.300 "cntlid": 0, 00:24:37.300 "vendor_id": "0x1b36", 00:24:37.300 "model_number": "QEMU NVMe Ctrl", 00:24:37.300 "serial_number": "12341", 00:24:37.300 "firmware_revision": "8.0.0", 00:24:37.300 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:37.300 "oacs": { 00:24:37.300 "security": 0, 00:24:37.300 "format": 1, 00:24:37.300 "firmware": 0, 00:24:37.300 "ns_manage": 1 00:24:37.300 }, 00:24:37.300 "multi_ctrlr": false, 00:24:37.300 "ana_reporting": false 00:24:37.300 }, 00:24:37.300 "vs": { 00:24:37.300 "nvme_version": "1.4" 00:24:37.300 }, 00:24:37.300 "ns_data": { 00:24:37.300 "id": 1, 00:24:37.300 "can_share": false 00:24:37.300 } 00:24:37.300 } 00:24:37.300 ], 00:24:37.300 "mp_policy": "active_passive" 00:24:37.300 } 00:24:37.300 } 00:24:37.300 ]' 00:24:37.300 00:28:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:37.300 00:28:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:24:37.300 00:28:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:37.300 00:28:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=1310720 00:24:37.300 00:28:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:24:37.300 00:28:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 5120 00:24:37.300 00:28:51 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:24:37.300 00:28:51 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:37.300 00:28:51 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:24:37.300 00:28:51 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:37.300 00:28:51 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:37.560 00:28:52 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=1a8ac480-b00a-4220-8bab-2f2caae71c69 00:24:37.560 00:28:52 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:24:37.560 00:28:52 
ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1a8ac480-b00a-4220-8bab-2f2caae71c69 00:24:37.819 00:28:52 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:38.077 00:28:52 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=0cf7e7c0-3e50-4cea-96c5-cae9c664ab97 00:24:38.077 00:28:52 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0cf7e7c0-3e50-4cea-96c5-cae9c664ab97 00:24:38.077 00:28:52 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=21de0ee2-0491-417d-984c-ee9f915442d7 00:24:38.077 00:28:52 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:24:38.077 00:28:52 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 21de0ee2-0491-417d-984c-ee9f915442d7 00:24:38.077 00:28:52 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:24:38.077 00:28:52 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:38.077 00:28:52 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=21de0ee2-0491-417d-984c-ee9f915442d7 00:24:38.077 00:28:52 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:24:38.077 00:28:52 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 21de0ee2-0491-417d-984c-ee9f915442d7 00:24:38.077 00:28:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=21de0ee2-0491-417d-984c-ee9f915442d7 00:24:38.077 00:28:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:38.077 00:28:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:24:38.077 00:28:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:24:38.077 00:28:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 21de0ee2-0491-417d-984c-ee9f915442d7 00:24:38.365 00:28:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:38.365 { 00:24:38.365 "name": "21de0ee2-0491-417d-984c-ee9f915442d7", 00:24:38.365 "aliases": [ 00:24:38.365 "lvs/nvme0n1p0" 00:24:38.365 ], 00:24:38.365 "product_name": "Logical Volume", 00:24:38.365 "block_size": 4096, 00:24:38.365 "num_blocks": 26476544, 00:24:38.365 "uuid": "21de0ee2-0491-417d-984c-ee9f915442d7", 00:24:38.365 "assigned_rate_limits": { 00:24:38.365 "rw_ios_per_sec": 0, 00:24:38.365 "rw_mbytes_per_sec": 0, 00:24:38.365 "r_mbytes_per_sec": 0, 00:24:38.365 "w_mbytes_per_sec": 0 00:24:38.365 }, 00:24:38.365 "claimed": false, 00:24:38.365 "zoned": false, 00:24:38.365 "supported_io_types": { 00:24:38.365 "read": true, 00:24:38.365 "write": true, 00:24:38.365 "unmap": true, 00:24:38.365 "write_zeroes": true, 00:24:38.365 "flush": false, 00:24:38.365 "reset": true, 00:24:38.365 "compare": false, 00:24:38.365 "compare_and_write": false, 00:24:38.365 "abort": false, 00:24:38.365 "nvme_admin": false, 00:24:38.365 "nvme_io": false 00:24:38.365 }, 00:24:38.365 "driver_specific": { 00:24:38.365 "lvol": { 00:24:38.365 "lvol_store_uuid": "0cf7e7c0-3e50-4cea-96c5-cae9c664ab97", 00:24:38.365 "base_bdev": "nvme0n1", 00:24:38.365 "thin_provision": true, 00:24:38.365 "num_allocated_clusters": 0, 00:24:38.365 "snapshot": false, 00:24:38.365 "clone": false, 00:24:38.365 "esnap_clone": false 00:24:38.365 } 00:24:38.365 } 00:24:38.365 } 00:24:38.365 ]' 00:24:38.365 00:28:52 
ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:38.365 00:28:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:24:38.365 00:28:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:38.365 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:24:38.365 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:24:38.365 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:24:38.365 00:28:53 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:24:38.365 00:28:53 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:24:38.365 00:28:53 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:38.624 00:28:53 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:38.624 00:28:53 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:38.624 00:28:53 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 21de0ee2-0491-417d-984c-ee9f915442d7 00:24:38.624 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=21de0ee2-0491-417d-984c-ee9f915442d7 00:24:38.624 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:38.624 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:24:38.624 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:24:38.624 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 21de0ee2-0491-417d-984c-ee9f915442d7 00:24:38.882 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:38.882 { 00:24:38.882 "name": "21de0ee2-0491-417d-984c-ee9f915442d7", 00:24:38.882 "aliases": [ 00:24:38.882 "lvs/nvme0n1p0" 00:24:38.882 ], 00:24:38.882 "product_name": "Logical Volume", 00:24:38.882 "block_size": 4096, 00:24:38.882 "num_blocks": 26476544, 00:24:38.882 "uuid": "21de0ee2-0491-417d-984c-ee9f915442d7", 00:24:38.882 "assigned_rate_limits": { 00:24:38.882 "rw_ios_per_sec": 0, 00:24:38.882 "rw_mbytes_per_sec": 0, 00:24:38.882 "r_mbytes_per_sec": 0, 00:24:38.882 "w_mbytes_per_sec": 0 00:24:38.882 }, 00:24:38.882 "claimed": false, 00:24:38.882 "zoned": false, 00:24:38.882 "supported_io_types": { 00:24:38.882 "read": true, 00:24:38.882 "write": true, 00:24:38.882 "unmap": true, 00:24:38.882 "write_zeroes": true, 00:24:38.882 "flush": false, 00:24:38.882 "reset": true, 00:24:38.882 "compare": false, 00:24:38.882 "compare_and_write": false, 00:24:38.882 "abort": false, 00:24:38.882 "nvme_admin": false, 00:24:38.882 "nvme_io": false 00:24:38.882 }, 00:24:38.882 "driver_specific": { 00:24:38.882 "lvol": { 00:24:38.882 "lvol_store_uuid": "0cf7e7c0-3e50-4cea-96c5-cae9c664ab97", 00:24:38.882 "base_bdev": "nvme0n1", 00:24:38.882 "thin_provision": true, 00:24:38.882 "num_allocated_clusters": 0, 00:24:38.882 "snapshot": false, 00:24:38.882 "clone": false, 00:24:38.882 "esnap_clone": false 00:24:38.882 } 00:24:38.882 } 00:24:38.882 } 00:24:38.882 ]' 00:24:38.882 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:38.882 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:24:38.882 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 
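The bdev_get_bdevs/jq pairs above are the harness's get_bdev_size helper, which converts a bdev's geometry into MiB. A minimal sketch consistent with the trace (the here-string form is an assumption):

  # Size of a bdev in MiB, derived from the JSON that bdev_get_bdevs returns.
  get_bdev_size() {
      local bdev_name=$1 bdev_info bs nb
      bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
      bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for both bdevs here
      nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 26476544 for the lvol
      echo $(( bs * nb / 1024 / 1024 ))             # 4096 * 26476544 / 2^20 = 103424 MiB
  }

The nb=26476544 and bdev_size=103424 assignments that follow are exactly this computation, and the cache_size=5171 derived a few steps later is consistent with sizing the NV cache split at 5% of the base volume (103424 * 5 / 100 = 5171 with integer truncation).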
00:24:38.882 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:24:38.882 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:24:38.882 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:24:38.882 00:28:53 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:24:38.882 00:28:53 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:39.140 00:28:53 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:24:39.140 00:28:53 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 21de0ee2-0491-417d-984c-ee9f915442d7 00:24:39.140 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=21de0ee2-0491-417d-984c-ee9f915442d7 00:24:39.140 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:39.140 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:24:39.140 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:24:39.140 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 21de0ee2-0491-417d-984c-ee9f915442d7 00:24:39.398 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:39.398 { 00:24:39.398 "name": "21de0ee2-0491-417d-984c-ee9f915442d7", 00:24:39.398 "aliases": [ 00:24:39.398 "lvs/nvme0n1p0" 00:24:39.398 ], 00:24:39.398 "product_name": "Logical Volume", 00:24:39.398 "block_size": 4096, 00:24:39.398 "num_blocks": 26476544, 00:24:39.398 "uuid": "21de0ee2-0491-417d-984c-ee9f915442d7", 00:24:39.398 "assigned_rate_limits": { 00:24:39.398 "rw_ios_per_sec": 0, 00:24:39.398 "rw_mbytes_per_sec": 0, 00:24:39.398 "r_mbytes_per_sec": 0, 00:24:39.398 "w_mbytes_per_sec": 0 00:24:39.398 }, 00:24:39.398 "claimed": false, 00:24:39.398 "zoned": false, 00:24:39.398 "supported_io_types": { 00:24:39.398 "read": true, 00:24:39.398 "write": true, 00:24:39.398 "unmap": true, 00:24:39.398 "write_zeroes": true, 00:24:39.398 "flush": false, 00:24:39.398 "reset": true, 00:24:39.398 "compare": false, 00:24:39.398 "compare_and_write": false, 00:24:39.398 "abort": false, 00:24:39.398 "nvme_admin": false, 00:24:39.398 "nvme_io": false 00:24:39.398 }, 00:24:39.398 "driver_specific": { 00:24:39.398 "lvol": { 00:24:39.398 "lvol_store_uuid": "0cf7e7c0-3e50-4cea-96c5-cae9c664ab97", 00:24:39.398 "base_bdev": "nvme0n1", 00:24:39.398 "thin_provision": true, 00:24:39.398 "num_allocated_clusters": 0, 00:24:39.398 "snapshot": false, 00:24:39.398 "clone": false, 00:24:39.398 "esnap_clone": false 00:24:39.398 } 00:24:39.398 } 00:24:39.398 } 00:24:39.398 ]' 00:24:39.398 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:39.398 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:24:39.398 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:39.398 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:24:39.398 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:24:39.398 00:28:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:24:39.398 00:28:53 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:24:39.398 00:28:53 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # 
ftl_construct_args='bdev_ftl_create -b ftl0 -d 21de0ee2-0491-417d-984c-ee9f915442d7 --l2p_dram_limit 10' 00:24:39.398 00:28:53 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:24:39.398 00:28:53 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:24:39.398 00:28:53 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:39.398 00:28:53 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:24:39.398 00:28:53 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:24:39.398 00:28:53 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 21de0ee2-0491-417d-984c-ee9f915442d7 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:24:39.658 [2024-07-23 00:28:54.161279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.658 [2024-07-23 00:28:54.161331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:39.658 [2024-07-23 00:28:54.161350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:39.658 [2024-07-23 00:28:54.161361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.658 [2024-07-23 00:28:54.161425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.658 [2024-07-23 00:28:54.161437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:39.658 [2024-07-23 00:28:54.161451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:39.658 [2024-07-23 00:28:54.161466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.658 [2024-07-23 00:28:54.161494] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:39.658 [2024-07-23 00:28:54.161783] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:39.658 [2024-07-23 00:28:54.161821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.658 [2024-07-23 00:28:54.161843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:39.658 [2024-07-23 00:28:54.161858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:24:39.658 [2024-07-23 00:28:54.161869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.658 [2024-07-23 00:28:54.161958] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 88705295-19ad-48c6-b0e8-453394f04b93 00:24:39.659 [2024-07-23 00:28:54.163366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.659 [2024-07-23 00:28:54.163400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:39.659 [2024-07-23 00:28:54.163413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:39.659 [2024-07-23 00:28:54.163430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.659 [2024-07-23 00:28:54.170851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.659 [2024-07-23 00:28:54.170884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:39.659 [2024-07-23 00:28:54.170896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.392 ms 00:24:39.659 [2024-07-23 00:28:54.170909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.659 
[2024-07-23 00:28:54.171007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.659 [2024-07-23 00:28:54.171028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:39.659 [2024-07-23 00:28:54.171039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:39.659 [2024-07-23 00:28:54.171052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.659 [2024-07-23 00:28:54.171115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.659 [2024-07-23 00:28:54.171130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:39.659 [2024-07-23 00:28:54.171141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:39.659 [2024-07-23 00:28:54.171154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.659 [2024-07-23 00:28:54.171179] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:39.659 [2024-07-23 00:28:54.172992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.659 [2024-07-23 00:28:54.173027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:39.659 [2024-07-23 00:28:54.173042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.820 ms 00:24:39.659 [2024-07-23 00:28:54.173052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.659 [2024-07-23 00:28:54.173091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.659 [2024-07-23 00:28:54.173104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:39.659 [2024-07-23 00:28:54.173117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:39.659 [2024-07-23 00:28:54.173127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.659 [2024-07-23 00:28:54.173161] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:39.659 [2024-07-23 00:28:54.173324] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:39.659 [2024-07-23 00:28:54.173345] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:39.659 [2024-07-23 00:28:54.173359] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:39.659 [2024-07-23 00:28:54.173374] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:39.659 [2024-07-23 00:28:54.173387] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:39.659 [2024-07-23 00:28:54.173401] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:39.659 [2024-07-23 00:28:54.173412] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:39.659 [2024-07-23 00:28:54.173428] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:39.659 [2024-07-23 00:28:54.173437] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:39.659 [2024-07-23 00:28:54.173452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.659 [2024-07-23 00:28:54.173462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:39.659 [2024-07-23 
00:28:54.173475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:24:39.659 [2024-07-23 00:28:54.173485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.659 [2024-07-23 00:28:54.173558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.659 [2024-07-23 00:28:54.173571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:39.659 [2024-07-23 00:28:54.173587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:39.659 [2024-07-23 00:28:54.173598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.659 [2024-07-23 00:28:54.173683] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:39.659 [2024-07-23 00:28:54.173697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:39.659 [2024-07-23 00:28:54.173710] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:39.659 [2024-07-23 00:28:54.173720] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.659 [2024-07-23 00:28:54.173733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:39.659 [2024-07-23 00:28:54.173743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:39.659 [2024-07-23 00:28:54.173757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:39.659 [2024-07-23 00:28:54.173768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:39.659 [2024-07-23 00:28:54.173780] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:39.659 [2024-07-23 00:28:54.173790] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:39.659 [2024-07-23 00:28:54.173802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:39.659 [2024-07-23 00:28:54.173812] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:39.659 [2024-07-23 00:28:54.173824] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:39.659 [2024-07-23 00:28:54.173833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:39.659 [2024-07-23 00:28:54.173849] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:39.659 [2024-07-23 00:28:54.173859] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.659 [2024-07-23 00:28:54.173870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:39.659 [2024-07-23 00:28:54.173880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:39.659 [2024-07-23 00:28:54.173892] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.659 [2024-07-23 00:28:54.173901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:39.659 [2024-07-23 00:28:54.173913] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:39.659 [2024-07-23 00:28:54.173922] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:39.659 [2024-07-23 00:28:54.173933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:39.659 [2024-07-23 00:28:54.173942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:39.659 [2024-07-23 00:28:54.173954] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:39.659 [2024-07-23 00:28:54.173963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 
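The region dump in progress here is emitted while the FTL instance from the bdev_ftl_create trace above is brought up. Pulled out of the xtrace, the invocation is effectively:

  # Create the FTL bdev: the thin-provisioned lvol as base device, the
  # nvc0n1p0 split as NV cache, resident L2P capped at 10 MiB, and the
  # fast-shutdown path enabled. -t 240 raises the rpc.py timeout, since
  # first startup scrubs the NV cache (about 3.2 s in this run).
  "$rpc_py" -t 240 bdev_ftl_create -b ftl0 \
      -d 21de0ee2-0491-417d-984c-ee9f915442d7 \
      -c nvc0n1p0 \
      --l2p_dram_limit 10 \
      --fast-shutdown

The 80.00 MiB l2p region follows from the dump's own figures: 20971520 L2P entries at 4 bytes per address is 83886080 bytes, i.e. 80 MiB, of which only 10 MiB may stay resident per --l2p_dram_limit (the "l2p maximum resident size is: 9 (of 10) MiB" notice later in the startup confirms the cap).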
00:24:39.659 [2024-07-23 00:28:54.173975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:39.659 [2024-07-23 00:28:54.173984] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:39.659 [2024-07-23 00:28:54.173997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:39.659 [2024-07-23 00:28:54.174006] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:39.659 [2024-07-23 00:28:54.174020] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:39.659 [2024-07-23 00:28:54.174029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:39.659 [2024-07-23 00:28:54.174042] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:39.659 [2024-07-23 00:28:54.174051] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:39.659 [2024-07-23 00:28:54.174062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:39.659 [2024-07-23 00:28:54.174071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:39.659 [2024-07-23 00:28:54.174083] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:39.659 [2024-07-23 00:28:54.174092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:39.659 [2024-07-23 00:28:54.174103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:39.659 [2024-07-23 00:28:54.174112] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.659 [2024-07-23 00:28:54.174123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:39.659 [2024-07-23 00:28:54.174133] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:39.659 [2024-07-23 00:28:54.174145] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.659 [2024-07-23 00:28:54.174153] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:39.659 [2024-07-23 00:28:54.174173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:39.659 [2024-07-23 00:28:54.174183] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:39.659 [2024-07-23 00:28:54.174199] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.659 [2024-07-23 00:28:54.174213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:39.659 [2024-07-23 00:28:54.174225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:39.659 [2024-07-23 00:28:54.174234] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:39.659 [2024-07-23 00:28:54.174246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:39.659 [2024-07-23 00:28:54.174256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:39.659 [2024-07-23 00:28:54.174308] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:39.659 [2024-07-23 00:28:54.174323] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:39.659 [2024-07-23 00:28:54.174340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:39.659 [2024-07-23 00:28:54.174363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:39.659 
[2024-07-23 00:28:54.174377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:39.659 [2024-07-23 00:28:54.174387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:39.659 [2024-07-23 00:28:54.174400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:39.659 [2024-07-23 00:28:54.174411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:39.660 [2024-07-23 00:28:54.174423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:39.660 [2024-07-23 00:28:54.174433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:39.660 [2024-07-23 00:28:54.174448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:39.660 [2024-07-23 00:28:54.174458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:39.660 [2024-07-23 00:28:54.174482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:39.660 [2024-07-23 00:28:54.174493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:39.660 [2024-07-23 00:28:54.174509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:39.660 [2024-07-23 00:28:54.174519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:39.660 [2024-07-23 00:28:54.174532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:39.660 [2024-07-23 00:28:54.174542] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:39.660 [2024-07-23 00:28:54.174556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:39.660 [2024-07-23 00:28:54.174567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:39.660 [2024-07-23 00:28:54.174580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:39.660 [2024-07-23 00:28:54.174591] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:39.660 [2024-07-23 00:28:54.174604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:39.660 [2024-07-23 00:28:54.174615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.660 [2024-07-23 00:28:54.174628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:39.660 [2024-07-23 
00:28:54.174647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms 00:24:39.660 [2024-07-23 00:28:54.174662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.660 [2024-07-23 00:28:54.174706] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:39.660 [2024-07-23 00:28:54.174722] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:42.949 [2024-07-23 00:28:57.408165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.408239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:42.949 [2024-07-23 00:28:57.408256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3238.705 ms 00:24:42.949 [2024-07-23 00:28:57.408280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.419527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.419597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:42.949 [2024-07-23 00:28:57.419613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.166 ms 00:24:42.949 [2024-07-23 00:28:57.419627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.419742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.419765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:42.949 [2024-07-23 00:28:57.419776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:42.949 [2024-07-23 00:28:57.419788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.430201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.430250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:42.949 [2024-07-23 00:28:57.430285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.376 ms 00:24:42.949 [2024-07-23 00:28:57.430299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.430338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.430352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:42.949 [2024-07-23 00:28:57.430363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:42.949 [2024-07-23 00:28:57.430376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.430847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.430878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:42.949 [2024-07-23 00:28:57.430890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:24:42.949 [2024-07-23 00:28:57.430903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.431010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.431033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:42.949 [2024-07-23 00:28:57.431045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.085 ms 00:24:42.949 [2024-07-23 00:28:57.431058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.438386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.438426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:42.949 [2024-07-23 00:28:57.438447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.319 ms 00:24:42.949 [2024-07-23 00:28:57.438461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.446406] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:42.949 [2024-07-23 00:28:57.449653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.449684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:42.949 [2024-07-23 00:28:57.449700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.129 ms 00:24:42.949 [2024-07-23 00:28:57.449713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.532934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.533026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:42.949 [2024-07-23 00:28:57.533045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.320 ms 00:24:42.949 [2024-07-23 00:28:57.533059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.533249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.533274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:42.949 [2024-07-23 00:28:57.533290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:24:42.949 [2024-07-23 00:28:57.533301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.536945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.537006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:42.949 [2024-07-23 00:28:57.537022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.613 ms 00:24:42.949 [2024-07-23 00:28:57.537036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.539876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.539911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:42.949 [2024-07-23 00:28:57.539927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.802 ms 00:24:42.949 [2024-07-23 00:28:57.539939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.540217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.540236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:42.949 [2024-07-23 00:28:57.540251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:24:42.949 [2024-07-23 00:28:57.540274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.577840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 
[2024-07-23 00:28:57.577881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:42.949 [2024-07-23 00:28:57.577899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.595 ms 00:24:42.949 [2024-07-23 00:28:57.577913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.582181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.582214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:42.949 [2024-07-23 00:28:57.582230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.232 ms 00:24:42.949 [2024-07-23 00:28:57.582242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.585478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.585507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:42.949 [2024-07-23 00:28:57.585523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.188 ms 00:24:42.949 [2024-07-23 00:28:57.585533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.589117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.589148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:42.949 [2024-07-23 00:28:57.589163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.547 ms 00:24:42.949 [2024-07-23 00:28:57.589173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.589227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.589241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:42.949 [2024-07-23 00:28:57.589255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:42.949 [2024-07-23 00:28:57.589277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.589342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.949 [2024-07-23 00:28:57.589354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:42.949 [2024-07-23 00:28:57.589367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:42.949 [2024-07-23 00:28:57.589377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.949 [2024-07-23 00:28:57.590350] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3434.265 ms, result 0 00:24:42.949 { 00:24:42.949 "name": "ftl0", 00:24:42.949 "uuid": "88705295-19ad-48c6-b0e8-453394f04b93" 00:24:42.949 } 00:24:42.949 00:28:57 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:24:42.949 00:28:57 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:43.208 00:28:57 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:24:43.208 00:28:57 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:43.468 [2024-07-23 00:28:57.978515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.468 [2024-07-23 00:28:57.978557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 
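The unload beginning here is what makes the later restore possible: restore.sh first snapshots the bdev subsystem configuration, then tears the FTL bdev down, and the trace below shows the full persist sequence (L2P, NV cache metadata, valid map, band and trim metadata, superblock) before the clean state is set. A minimal sketch of the @61-@65 sequence (the destination file is not visible in the trace and is assumed here):

  # Wrap the bdev subsystem dump in a top-level "subsystems" array so it
  # can be loaded back verbatim, then unload the FTL bdev.
  {
      echo '{"subsystems": ['
      "$rpc_py" save_subsystem_config -n bdev
      echo ']}'
  } > "$ftl_json"   # path assumed for illustration
  "$rpc_py" bdev_ftl_unload -b ftl0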
00:24:43.468 [2024-07-23 00:28:57.978571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:43.468 [2024-07-23 00:28:57.978586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.468 [2024-07-23 00:28:57.978610] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:43.468 [2024-07-23 00:28:57.979297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.468 [2024-07-23 00:28:57.979310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:43.468 [2024-07-23 00:28:57.979342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:24:43.468 [2024-07-23 00:28:57.979352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.469 [2024-07-23 00:28:57.979564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.469 [2024-07-23 00:28:57.979576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:43.469 [2024-07-23 00:28:57.979589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:24:43.469 [2024-07-23 00:28:57.979599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.469 [2024-07-23 00:28:57.982146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.469 [2024-07-23 00:28:57.982166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:43.469 [2024-07-23 00:28:57.982180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.529 ms 00:24:43.469 [2024-07-23 00:28:57.982190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.469 [2024-07-23 00:28:57.987036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.469 [2024-07-23 00:28:57.987065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:43.469 [2024-07-23 00:28:57.987078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.831 ms 00:24:43.469 [2024-07-23 00:28:57.987087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.469 [2024-07-23 00:28:57.988557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.469 [2024-07-23 00:28:57.988590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:43.469 [2024-07-23 00:28:57.988607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.399 ms 00:24:43.469 [2024-07-23 00:28:57.988616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.469 [2024-07-23 00:28:57.993151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.469 [2024-07-23 00:28:57.993185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:43.469 [2024-07-23 00:28:57.993201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.504 ms 00:24:43.469 [2024-07-23 00:28:57.993211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.469 [2024-07-23 00:28:57.993333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.469 [2024-07-23 00:28:57.993358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:43.469 [2024-07-23 00:28:57.993374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:24:43.469 [2024-07-23 00:28:57.993390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.469 [2024-07-23 
00:28:57.995098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.469 [2024-07-23 00:28:57.995130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:43.469 [2024-07-23 00:28:57.995144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:24:43.469 [2024-07-23 00:28:57.995153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.469 [2024-07-23 00:28:57.996579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.469 [2024-07-23 00:28:57.996609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:43.469 [2024-07-23 00:28:57.996625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.392 ms 00:24:43.469 [2024-07-23 00:28:57.996635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.469 [2024-07-23 00:28:57.997861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.469 [2024-07-23 00:28:57.997891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:43.469 [2024-07-23 00:28:57.997905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:24:43.469 [2024-07-23 00:28:57.997914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.469 [2024-07-23 00:28:57.998967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.469 [2024-07-23 00:28:57.998997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:43.469 [2024-07-23 00:28:57.999011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:24:43.469 [2024-07-23 00:28:57.999020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.469 [2024-07-23 00:28:57.999051] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:43.469 [2024-07-23 00:28:57.999068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999540] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:43.469 [2024-07-23 00:28:57.999775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 
00:28:57.999846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:57.999993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:24:43.470 [2024-07-23 00:28:58.000153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:43.470 [2024-07-23 00:28:58.000337] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:43.470 [2024-07-23 00:28:58.000352] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88705295-19ad-48c6-b0e8-453394f04b93 00:24:43.470 [2024-07-23 00:28:58.000363] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:43.470 [2024-07-23 00:28:58.000375] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:43.470 [2024-07-23 00:28:58.000386] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:43.470 [2024-07-23 00:28:58.000409] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:43.470 [2024-07-23 00:28:58.000418] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:43.470 [2024-07-23 00:28:58.000430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:43.470 [2024-07-23 00:28:58.000444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:43.470 [2024-07-23 00:28:58.000455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:43.470 [2024-07-23 00:28:58.000464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:43.470 [2024-07-23 00:28:58.000476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.470 [2024-07-23 00:28:58.000493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:43.470 [2024-07-23 00:28:58.000506] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.429 ms 00:24:43.470 [2024-07-23 00:28:58.000516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.470 [2024-07-23 00:28:58.002328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.470 [2024-07-23 00:28:58.002345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:43.470 [2024-07-23 00:28:58.002361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.791 ms 00:24:43.470 [2024-07-23 00:28:58.002371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.470 [2024-07-23 00:28:58.002492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.470 [2024-07-23 00:28:58.002503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:43.470 [2024-07-23 00:28:58.002516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:24:43.470 [2024-07-23 00:28:58.002525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.470 [2024-07-23 00:28:58.009487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.470 [2024-07-23 00:28:58.009512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:43.470 [2024-07-23 00:28:58.009526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.470 [2024-07-23 00:28:58.009538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.470 [2024-07-23 00:28:58.009593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.470 [2024-07-23 00:28:58.009605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:43.470 [2024-07-23 00:28:58.009617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.470 [2024-07-23 00:28:58.009627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.470 [2024-07-23 00:28:58.009737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.470 [2024-07-23 00:28:58.009750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:43.470 [2024-07-23 00:28:58.009766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.470 [2024-07-23 00:28:58.009776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.470 [2024-07-23 00:28:58.009808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.470 [2024-07-23 00:28:58.009818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:43.470 [2024-07-23 00:28:58.009830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.470 [2024-07-23 00:28:58.009840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.470 [2024-07-23 00:28:58.021867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.470 [2024-07-23 00:28:58.021910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:43.470 [2024-07-23 00:28:58.021927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.470 [2024-07-23 00:28:58.021937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.470 [2024-07-23 00:28:58.030195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.470 [2024-07-23 00:28:58.030228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:24:43.470 [2024-07-23 00:28:58.030258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.470 [2024-07-23 00:28:58.030268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.470 [2024-07-23 00:28:58.030364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.470 [2024-07-23 00:28:58.030378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:43.470 [2024-07-23 00:28:58.030394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.470 [2024-07-23 00:28:58.030414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.470 [2024-07-23 00:28:58.030471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.470 [2024-07-23 00:28:58.030492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:43.470 [2024-07-23 00:28:58.030505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.470 [2024-07-23 00:28:58.030514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.470 [2024-07-23 00:28:58.030594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.471 [2024-07-23 00:28:58.030622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:43.471 [2024-07-23 00:28:58.030635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.471 [2024-07-23 00:28:58.030645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.471 [2024-07-23 00:28:58.030685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.471 [2024-07-23 00:28:58.030696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:43.471 [2024-07-23 00:28:58.030713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.471 [2024-07-23 00:28:58.030729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.471 [2024-07-23 00:28:58.030776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.471 [2024-07-23 00:28:58.030787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:43.471 [2024-07-23 00:28:58.030802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.471 [2024-07-23 00:28:58.030812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.471 [2024-07-23 00:28:58.030859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.471 [2024-07-23 00:28:58.030873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:43.471 [2024-07-23 00:28:58.030885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.471 [2024-07-23 00:28:58.030895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.471 [2024-07-23 00:28:58.031023] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.553 ms, result 0 00:24:43.471 true 00:24:43.471 00:28:58 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 94383 00:24:43.471 00:28:58 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 94383 ']' 00:24:43.471 00:28:58 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 94383 00:24:43.471 00:28:58 ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # uname 00:24:43.471 00:28:58 
ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:43.471 00:28:58 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94383 00:24:43.471 00:28:58 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:43.471 killing process with pid 94383 00:24:43.471 00:28:58 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:43.471 00:28:58 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94383' 00:24:43.471 00:28:58 ftl.ftl_restore_fast -- common/autotest_common.sh@965 -- # kill 94383 00:24:43.471 00:28:58 ftl.ftl_restore_fast -- common/autotest_common.sh@970 -- # wait 94383 00:24:46.755 00:29:00 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:24:50.042 262144+0 records in 00:24:50.042 262144+0 records out 00:24:50.042 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.77231 s, 285 MB/s 00:24:50.042 00:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:51.947 00:29:06 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:51.947 [2024-07-23 00:29:06.439525] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:51.947 [2024-07-23 00:29:06.439665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94575 ] 00:24:51.947 [2024-07-23 00:29:06.589046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.206 [2024-07-23 00:29:06.631318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.206 [2024-07-23 00:29:06.732870] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:52.206 [2024-07-23 00:29:06.732944] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:52.206 [2024-07-23 00:29:06.883876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.206 [2024-07-23 00:29:06.883928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:52.206 [2024-07-23 00:29:06.883945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:52.206 [2024-07-23 00:29:06.883962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.206 [2024-07-23 00:29:06.884013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.206 [2024-07-23 00:29:06.884025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:52.206 [2024-07-23 00:29:06.884035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:52.206 [2024-07-23 00:29:06.884048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.206 [2024-07-23 00:29:06.884085] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:52.206 [2024-07-23 00:29:06.884331] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:52.206 [2024-07-23 00:29:06.884353] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:24:52.206 [2024-07-23 00:29:06.884373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:52.206 [2024-07-23 00:29:06.884384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:24:52.206 [2024-07-23 00:29:06.884394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.206 [2024-07-23 00:29:06.885987] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:52.466 [2024-07-23 00:29:06.888567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.466 [2024-07-23 00:29:06.888605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:52.466 [2024-07-23 00:29:06.888625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:24:52.466 [2024-07-23 00:29:06.888645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.466 [2024-07-23 00:29:06.888710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.466 [2024-07-23 00:29:06.888724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:52.466 [2024-07-23 00:29:06.888738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:52.466 [2024-07-23 00:29:06.888750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.466 [2024-07-23 00:29:06.895446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.466 [2024-07-23 00:29:06.895481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:52.466 [2024-07-23 00:29:06.895501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.656 ms 00:24:52.466 [2024-07-23 00:29:06.895511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.466 [2024-07-23 00:29:06.895610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.466 [2024-07-23 00:29:06.895632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:52.466 [2024-07-23 00:29:06.895643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:24:52.466 [2024-07-23 00:29:06.895661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.466 [2024-07-23 00:29:06.895723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.466 [2024-07-23 00:29:06.895741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:52.466 [2024-07-23 00:29:06.895758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:52.466 [2024-07-23 00:29:06.895767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.466 [2024-07-23 00:29:06.895801] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:52.466 [2024-07-23 00:29:06.897431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.466 [2024-07-23 00:29:06.897467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:52.466 [2024-07-23 00:29:06.897479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.641 ms 00:24:52.466 [2024-07-23 00:29:06.897489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.466 [2024-07-23 00:29:06.897522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.466 [2024-07-23 00:29:06.897533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Decorate bands 00:24:52.466 [2024-07-23 00:29:06.897546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:52.466 [2024-07-23 00:29:06.897556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.466 [2024-07-23 00:29:06.897578] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:52.466 [2024-07-23 00:29:06.897607] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:52.466 [2024-07-23 00:29:06.897647] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:52.466 [2024-07-23 00:29:06.897670] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:52.466 [2024-07-23 00:29:06.897756] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:52.466 [2024-07-23 00:29:06.897772] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:52.466 [2024-07-23 00:29:06.897787] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:52.466 [2024-07-23 00:29:06.897800] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:52.466 [2024-07-23 00:29:06.897811] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:52.466 [2024-07-23 00:29:06.897823] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:52.466 [2024-07-23 00:29:06.897833] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:52.466 [2024-07-23 00:29:06.897842] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:52.466 [2024-07-23 00:29:06.897852] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:52.466 [2024-07-23 00:29:06.897862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.466 [2024-07-23 00:29:06.897871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:52.466 [2024-07-23 00:29:06.897882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:24:52.466 [2024-07-23 00:29:06.897895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.466 [2024-07-23 00:29:06.897962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.466 [2024-07-23 00:29:06.897980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:52.466 [2024-07-23 00:29:06.897990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:52.466 [2024-07-23 00:29:06.898000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.466 [2024-07-23 00:29:06.898091] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:52.466 [2024-07-23 00:29:06.898111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:52.466 [2024-07-23 00:29:06.898121] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:52.466 [2024-07-23 00:29:06.898131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.466 [2024-07-23 00:29:06.898145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:52.466 [2024-07-23 
00:29:06.898154] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:52.466 [2024-07-23 00:29:06.898164] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:52.466 [2024-07-23 00:29:06.898172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:52.466 [2024-07-23 00:29:06.898182] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:52.466 [2024-07-23 00:29:06.898191] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:52.466 [2024-07-23 00:29:06.898201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:52.466 [2024-07-23 00:29:06.898210] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:52.467 [2024-07-23 00:29:06.898220] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:52.467 [2024-07-23 00:29:06.898229] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:52.467 [2024-07-23 00:29:06.898238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:52.467 [2024-07-23 00:29:06.898247] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.467 [2024-07-23 00:29:06.898270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:52.467 [2024-07-23 00:29:06.898280] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:52.467 [2024-07-23 00:29:06.898290] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.467 [2024-07-23 00:29:06.898299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:52.467 [2024-07-23 00:29:06.898308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:52.467 [2024-07-23 00:29:06.898317] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.467 [2024-07-23 00:29:06.898326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:52.467 [2024-07-23 00:29:06.898335] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:52.467 [2024-07-23 00:29:06.898344] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.467 [2024-07-23 00:29:06.898353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:52.467 [2024-07-23 00:29:06.898362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:52.467 [2024-07-23 00:29:06.898371] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.467 [2024-07-23 00:29:06.898380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:52.467 [2024-07-23 00:29:06.898389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:52.467 [2024-07-23 00:29:06.898398] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.467 [2024-07-23 00:29:06.898407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:52.467 [2024-07-23 00:29:06.898421] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:52.467 [2024-07-23 00:29:06.898430] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:52.467 [2024-07-23 00:29:06.898439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:52.467 [2024-07-23 00:29:06.898447] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:52.467 [2024-07-23 00:29:06.898456] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:24:52.467 [2024-07-23 00:29:06.898465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:52.467 [2024-07-23 00:29:06.898475] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:52.467 [2024-07-23 00:29:06.898483] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.467 [2024-07-23 00:29:06.898492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:52.467 [2024-07-23 00:29:06.898501] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:52.467 [2024-07-23 00:29:06.898510] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.467 [2024-07-23 00:29:06.898518] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:52.467 [2024-07-23 00:29:06.898536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:52.467 [2024-07-23 00:29:06.898546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:52.467 [2024-07-23 00:29:06.898562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.467 [2024-07-23 00:29:06.898572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:52.467 [2024-07-23 00:29:06.898585] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:52.467 [2024-07-23 00:29:06.898594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:52.467 [2024-07-23 00:29:06.898603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:52.467 [2024-07-23 00:29:06.898612] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:52.467 [2024-07-23 00:29:06.898622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:52.467 [2024-07-23 00:29:06.898632] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:52.467 [2024-07-23 00:29:06.898644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:52.467 [2024-07-23 00:29:06.898655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:52.467 [2024-07-23 00:29:06.898665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:52.467 [2024-07-23 00:29:06.898675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:52.467 [2024-07-23 00:29:06.898686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:52.467 [2024-07-23 00:29:06.898696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:52.467 [2024-07-23 00:29:06.898705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:52.467 [2024-07-23 00:29:06.898715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:52.467 [2024-07-23 00:29:06.898725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:52.467 [2024-07-23 
00:29:06.898734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:52.467 [2024-07-23 00:29:06.898747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:52.467 [2024-07-23 00:29:06.898756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:52.467 [2024-07-23 00:29:06.898767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:52.467 [2024-07-23 00:29:06.898776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:52.467 [2024-07-23 00:29:06.898787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:52.467 [2024-07-23 00:29:06.898797] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:52.467 [2024-07-23 00:29:06.898808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:52.467 [2024-07-23 00:29:06.898818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:52.467 [2024-07-23 00:29:06.898829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:52.467 [2024-07-23 00:29:06.898847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:52.467 [2024-07-23 00:29:06.898859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:52.467 [2024-07-23 00:29:06.898870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.467 [2024-07-23 00:29:06.898881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:52.467 [2024-07-23 00:29:06.898891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:24:52.467 [2024-07-23 00:29:06.898903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.467 [2024-07-23 00:29:06.918722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.467 [2024-07-23 00:29:06.918759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:52.467 [2024-07-23 00:29:06.918773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.806 ms 00:24:52.467 [2024-07-23 00:29:06.918784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.467 [2024-07-23 00:29:06.918873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.467 [2024-07-23 00:29:06.918891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:52.467 [2024-07-23 00:29:06.918908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:52.467 [2024-07-23 00:29:06.918921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.467 [2024-07-23 00:29:06.930161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.467 [2024-07-23 
00:29:06.930205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:52.467 [2024-07-23 00:29:06.930223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.197 ms 00:24:52.467 [2024-07-23 00:29:06.930252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.467 [2024-07-23 00:29:06.930322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.467 [2024-07-23 00:29:06.930346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:52.467 [2024-07-23 00:29:06.930360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:52.467 [2024-07-23 00:29:06.930377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.467 [2024-07-23 00:29:06.930861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.467 [2024-07-23 00:29:06.930881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:52.467 [2024-07-23 00:29:06.930899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:24:52.467 [2024-07-23 00:29:06.930909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.467 [2024-07-23 00:29:06.931023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.467 [2024-07-23 00:29:06.931039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:52.467 [2024-07-23 00:29:06.931049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:24:52.467 [2024-07-23 00:29:06.931059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.467 [2024-07-23 00:29:06.936879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.467 [2024-07-23 00:29:06.936914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:52.467 [2024-07-23 00:29:06.936926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.806 ms 00:24:52.467 [2024-07-23 00:29:06.936936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.467 [2024-07-23 00:29:06.939551] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:52.467 [2024-07-23 00:29:06.939586] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:52.467 [2024-07-23 00:29:06.939601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.467 [2024-07-23 00:29:06.939611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:52.468 [2024-07-23 00:29:06.939622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms 00:24:52.468 [2024-07-23 00:29:06.939632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.468 [2024-07-23 00:29:06.952209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.468 [2024-07-23 00:29:06.952253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:52.468 [2024-07-23 00:29:06.952277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.559 ms 00:24:52.468 [2024-07-23 00:29:06.952296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.468 [2024-07-23 00:29:06.953981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.468 [2024-07-23 00:29:06.954015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:24:52.468 [2024-07-23 00:29:06.954027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.637 ms 00:24:52.468 [2024-07-23 00:29:06.954036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.468 [2024-07-23 00:29:06.955479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.468 [2024-07-23 00:29:06.955510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:52.468 [2024-07-23 00:29:06.955522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms 00:24:52.468 [2024-07-23 00:29:06.955531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.468 [2024-07-23 00:29:06.955812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.468 [2024-07-23 00:29:06.955835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:52.468 [2024-07-23 00:29:06.955847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:24:52.468 [2024-07-23 00:29:06.955856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.468 [2024-07-23 00:29:06.976721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.468 [2024-07-23 00:29:06.976784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:52.468 [2024-07-23 00:29:06.976801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.867 ms 00:24:52.468 [2024-07-23 00:29:06.976830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.468 [2024-07-23 00:29:06.983155] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:52.468 [2024-07-23 00:29:06.985786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.468 [2024-07-23 00:29:06.985820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:52.468 [2024-07-23 00:29:06.985834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.888 ms 00:24:52.468 [2024-07-23 00:29:06.985844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.468 [2024-07-23 00:29:06.985898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.468 [2024-07-23 00:29:06.985910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:52.468 [2024-07-23 00:29:06.985921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:52.468 [2024-07-23 00:29:06.985931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.468 [2024-07-23 00:29:06.986007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.468 [2024-07-23 00:29:06.986019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:52.468 [2024-07-23 00:29:06.986032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:52.468 [2024-07-23 00:29:06.986045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.468 [2024-07-23 00:29:06.986065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.468 [2024-07-23 00:29:06.986076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:52.468 [2024-07-23 00:29:06.986086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:52.468 [2024-07-23 00:29:06.986095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
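The 'FTL startup' trace in progress here runs the restore path (SHM clean 0, NV cache state loaded, L2P and band metadata restored). It was triggered by the spdk_dd invocation above, which replays the test data onto the FTL bdev: restore.sh first produced a 1 GiB random file with dd (262144 × 4 KiB = 1073741824 bytes in 3.77231 s ≈ 285 MB/s, matching the dd report), fingerprinted it with md5sum, then handed it to spdk_dd together with the JSON config saved before the unload. A consolidated sketch of that data pass, with $SPDK standing in for /home/vagrant/spdk_repo/spdk:

  # Create and checksum 1 GiB of random test data (restore.sh@69-70)
  dd if=/dev/urandom of=$SPDK/test/ftl/testfile bs=4K count=256K
  md5sum $SPDK/test/ftl/testfile
  # Replay it onto the FTL bdev using the saved config (restore.sh@73)
  $SPDK/build/bin/spdk_dd --if=$SPDK/test/ftl/testfile --ob=ftl0 \
      --json=$SPDK/test/ftl/config/ftl.json

The 'Copying: .../1024 [MB]' progress that follows is spdk_dd's transfer output; at the reported average of 25 MBps the full gigabyte takes roughly 40 seconds, consistent with the 00:29:06 to 00:29:47 timestamps bracketing the copy.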
00:24:52.468 [2024-07-23 00:29:06.986126] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:52.468 [2024-07-23 00:29:06.986138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.468 [2024-07-23 00:29:06.986148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:52.468 [2024-07-23 00:29:06.986158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:52.468 [2024-07-23 00:29:06.986187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.468 [2024-07-23 00:29:06.989759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.468 [2024-07-23 00:29:06.989793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:52.468 [2024-07-23 00:29:06.989805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.559 ms 00:24:52.468 [2024-07-23 00:29:06.989815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.468 [2024-07-23 00:29:06.989877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.468 [2024-07-23 00:29:06.989888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:52.468 [2024-07-23 00:29:06.989898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:52.468 [2024-07-23 00:29:06.989908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.468 [2024-07-23 00:29:06.990973] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 106.881 ms, result 0 00:25:32.902  Copying: 26/1024 [MB] (26 MBps) Copying: 54/1024 [MB] (27 MBps) Copying: 81/1024 [MB] (27 MBps) Copying: 109/1024 [MB] (28 MBps) Copying: 136/1024 [MB] (26 MBps) Copying: 163/1024 [MB] (26 MBps) Copying: 189/1024 [MB] (25 MBps) Copying: 213/1024 [MB] (23 MBps) Copying: 239/1024 [MB] (25 MBps) Copying: 263/1024 [MB] (24 MBps) Copying: 288/1024 [MB] (24 MBps) Copying: 313/1024 [MB] (24 MBps) Copying: 338/1024 [MB] (24 MBps) Copying: 363/1024 [MB] (24 MBps) Copying: 388/1024 [MB] (24 MBps) Copying: 412/1024 [MB] (24 MBps) Copying: 437/1024 [MB] (24 MBps) Copying: 462/1024 [MB] (24 MBps) Copying: 486/1024 [MB] (24 MBps) Copying: 510/1024 [MB] (24 MBps) Copying: 535/1024 [MB] (24 MBps) Copying: 560/1024 [MB] (24 MBps) Copying: 583/1024 [MB] (23 MBps) Copying: 608/1024 [MB] (24 MBps) Copying: 633/1024 [MB] (25 MBps) Copying: 658/1024 [MB] (25 MBps) Copying: 684/1024 [MB] (25 MBps) Copying: 708/1024 [MB] (24 MBps) Copying: 733/1024 [MB] (25 MBps) Copying: 760/1024 [MB] (26 MBps) Copying: 785/1024 [MB] (25 MBps) Copying: 811/1024 [MB] (26 MBps) Copying: 838/1024 [MB] (26 MBps) Copying: 864/1024 [MB] (26 MBps) Copying: 889/1024 [MB] (24 MBps) Copying: 912/1024 [MB] (23 MBps) Copying: 938/1024 [MB] (25 MBps) Copying: 963/1024 [MB] (25 MBps) Copying: 989/1024 [MB] (25 MBps) Copying: 1015/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-23 00:29:47.279953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.902 [2024-07-23 00:29:47.280013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:32.902 [2024-07-23 00:29:47.280029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:32.902 [2024-07-23 00:29:47.280045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.902 [2024-07-23 00:29:47.280066] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:32.902 [2024-07-23 00:29:47.280739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.902 [2024-07-23 00:29:47.280753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:32.902 [2024-07-23 00:29:47.280764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:25:32.902 [2024-07-23 00:29:47.280782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.902 [2024-07-23 00:29:47.282640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.902 [2024-07-23 00:29:47.282678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:32.902 [2024-07-23 00:29:47.282690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.842 ms 00:25:32.902 [2024-07-23 00:29:47.282700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.902 [2024-07-23 00:29:47.282731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.902 [2024-07-23 00:29:47.282742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:25:32.902 [2024-07-23 00:29:47.282753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:32.902 [2024-07-23 00:29:47.282762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.902 [2024-07-23 00:29:47.282805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.902 [2024-07-23 00:29:47.282817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:25:32.902 [2024-07-23 00:29:47.282826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:32.902 [2024-07-23 00:29:47.282835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.902 [2024-07-23 00:29:47.282850] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:32.902 [2024-07-23 00:29:47.282866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.282879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.282890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.282900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.282911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.282922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.282932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.282942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.282952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.282962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.282972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 
261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.282982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.282992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:32.902 [2024-07-23 00:29:47.283388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283502] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 
00:29:47.283770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:32.903 [2024-07-23 00:29:47.283933] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:32.903 [2024-07-23 00:29:47.283943] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88705295-19ad-48c6-b0e8-453394f04b93 00:25:32.903 [2024-07-23 00:29:47.283953] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:32.903 [2024-07-23 00:29:47.283962] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:25:32.903 [2024-07-23 00:29:47.283971] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:32.903 [2024-07-23 00:29:47.283980] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:32.903 [2024-07-23 00:29:47.283989] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:32.903 [2024-07-23 00:29:47.283999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:32.903 [2024-07-23 00:29:47.284009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:32.903 [2024-07-23 00:29:47.284017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:32.903 [2024-07-23 00:29:47.284026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:32.903 [2024-07-23 00:29:47.284035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.903 [2024-07-23 00:29:47.284045] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:32.903 [2024-07-23 00:29:47.284058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:25:32.903 [2024-07-23 00:29:47.284067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.903 [2024-07-23 00:29:47.285707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.903 [2024-07-23 00:29:47.285730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:32.903 [2024-07-23 00:29:47.285741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:25:32.903 [2024-07-23 00:29:47.285751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.903 [2024-07-23 00:29:47.285875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.903 [2024-07-23 00:29:47.285891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:32.903 [2024-07-23 00:29:47.285911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:25:32.903 [2024-07-23 00:29:47.285921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.903 [2024-07-23 00:29:47.291908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.903 [2024-07-23 00:29:47.291933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:32.903 [2024-07-23 00:29:47.291945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.903 [2024-07-23 00:29:47.291954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.903 [2024-07-23 00:29:47.292001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.903 [2024-07-23 00:29:47.292016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:32.903 [2024-07-23 00:29:47.292026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.903 [2024-07-23 00:29:47.292035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.903 [2024-07-23 00:29:47.292063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.903 [2024-07-23 00:29:47.292082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:32.903 [2024-07-23 00:29:47.292091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.903 [2024-07-23 00:29:47.292105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.903 [2024-07-23 00:29:47.292120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.903 [2024-07-23 00:29:47.292131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:32.903 [2024-07-23 00:29:47.292143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.903 [2024-07-23 00:29:47.292152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.903 [2024-07-23 00:29:47.303543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.903 [2024-07-23 00:29:47.303587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:32.903 [2024-07-23 00:29:47.303600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.903 [2024-07-23 00:29:47.303626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.903 [2024-07-23 00:29:47.311737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:25:32.903 [2024-07-23 00:29:47.311778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:32.903 [2024-07-23 00:29:47.311797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.903 [2024-07-23 00:29:47.311830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.904 [2024-07-23 00:29:47.311881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.904 [2024-07-23 00:29:47.311892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:32.904 [2024-07-23 00:29:47.311910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.904 [2024-07-23 00:29:47.311920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.904 [2024-07-23 00:29:47.311943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.904 [2024-07-23 00:29:47.311954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:32.904 [2024-07-23 00:29:47.311964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.904 [2024-07-23 00:29:47.311977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.904 [2024-07-23 00:29:47.312032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.904 [2024-07-23 00:29:47.312044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:32.904 [2024-07-23 00:29:47.312055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.904 [2024-07-23 00:29:47.312064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.904 [2024-07-23 00:29:47.312090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.904 [2024-07-23 00:29:47.312102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:32.904 [2024-07-23 00:29:47.312112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.904 [2024-07-23 00:29:47.312121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.904 [2024-07-23 00:29:47.312158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.904 [2024-07-23 00:29:47.312169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:32.904 [2024-07-23 00:29:47.312179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.904 [2024-07-23 00:29:47.312196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.904 [2024-07-23 00:29:47.312242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.904 [2024-07-23 00:29:47.312255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:32.904 [2024-07-23 00:29:47.312265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.904 [2024-07-23 00:29:47.312289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.904 [2024-07-23 00:29:47.312405] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 32.469 ms, result 0 00:25:33.473 00:25:33.473 00:25:33.473 00:29:47 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:25:33.473 
[2024-07-23 00:29:48.001451] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:33.473 [2024-07-23 00:29:48.001619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94993 ] 00:25:33.473 [2024-07-23 00:29:48.153479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.732 [2024-07-23 00:29:48.194916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.732 [2024-07-23 00:29:48.296111] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:33.732 [2024-07-23 00:29:48.296194] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:33.992 [2024-07-23 00:29:48.446591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.992 [2024-07-23 00:29:48.446641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:33.992 [2024-07-23 00:29:48.446672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:33.992 [2024-07-23 00:29:48.446683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.992 [2024-07-23 00:29:48.446726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.992 [2024-07-23 00:29:48.446737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:33.992 [2024-07-23 00:29:48.446747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:33.992 [2024-07-23 00:29:48.446761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.992 [2024-07-23 00:29:48.446781] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:33.992 [2024-07-23 00:29:48.447064] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:33.992 [2024-07-23 00:29:48.447083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.992 [2024-07-23 00:29:48.447096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:33.992 [2024-07-23 00:29:48.447115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:25:33.992 [2024-07-23 00:29:48.447124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.992 [2024-07-23 00:29:48.447481] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:25:33.992 [2024-07-23 00:29:48.447508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.992 [2024-07-23 00:29:48.447528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:33.992 [2024-07-23 00:29:48.447544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:33.992 [2024-07-23 00:29:48.447557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.992 [2024-07-23 00:29:48.447614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.992 [2024-07-23 00:29:48.447636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:33.992 [2024-07-23 00:29:48.447646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:33.992 [2024-07-23 00:29:48.447655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.992 [2024-07-23 
00:29:48.448031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.992 [2024-07-23 00:29:48.448051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:33.993 [2024-07-23 00:29:48.448076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:25:33.993 [2024-07-23 00:29:48.448089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.993 [2024-07-23 00:29:48.448172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.993 [2024-07-23 00:29:48.448191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:33.993 [2024-07-23 00:29:48.448201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:33.993 [2024-07-23 00:29:48.448211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.993 [2024-07-23 00:29:48.448236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.993 [2024-07-23 00:29:48.448247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:33.993 [2024-07-23 00:29:48.448257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:33.993 [2024-07-23 00:29:48.448287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.993 [2024-07-23 00:29:48.448311] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:33.993 [2024-07-23 00:29:48.450059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.993 [2024-07-23 00:29:48.450080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:33.993 [2024-07-23 00:29:48.450091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:25:33.993 [2024-07-23 00:29:48.450101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.993 [2024-07-23 00:29:48.450131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.993 [2024-07-23 00:29:48.450144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:33.993 [2024-07-23 00:29:48.450154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:33.993 [2024-07-23 00:29:48.450163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.993 [2024-07-23 00:29:48.450186] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:33.993 [2024-07-23 00:29:48.450208] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:33.993 [2024-07-23 00:29:48.450247] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:33.993 [2024-07-23 00:29:48.450276] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:33.993 [2024-07-23 00:29:48.450359] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:33.993 [2024-07-23 00:29:48.450372] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:33.993 [2024-07-23 00:29:48.450385] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:33.993 [2024-07-23 00:29:48.450401] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:25:33.993 [2024-07-23 00:29:48.450413] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:33.993 [2024-07-23 00:29:48.450423] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:33.993 [2024-07-23 00:29:48.450432] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:33.993 [2024-07-23 00:29:48.450441] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:33.993 [2024-07-23 00:29:48.450451] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:33.993 [2024-07-23 00:29:48.450471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.993 [2024-07-23 00:29:48.450480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:33.993 [2024-07-23 00:29:48.450491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:25:33.993 [2024-07-23 00:29:48.450500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.993 [2024-07-23 00:29:48.450565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.993 [2024-07-23 00:29:48.450578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:33.993 [2024-07-23 00:29:48.450588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:33.993 [2024-07-23 00:29:48.450597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.993 [2024-07-23 00:29:48.450691] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:33.993 [2024-07-23 00:29:48.450704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:33.993 [2024-07-23 00:29:48.450722] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:33.993 [2024-07-23 00:29:48.450732] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.993 [2024-07-23 00:29:48.450742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:33.993 [2024-07-23 00:29:48.450751] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:33.993 [2024-07-23 00:29:48.450760] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:33.993 [2024-07-23 00:29:48.450769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:33.993 [2024-07-23 00:29:48.450781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:33.993 [2024-07-23 00:29:48.450791] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:33.993 [2024-07-23 00:29:48.450802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:33.993 [2024-07-23 00:29:48.450811] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:33.993 [2024-07-23 00:29:48.450819] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:33.993 [2024-07-23 00:29:48.450828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:33.993 [2024-07-23 00:29:48.450837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:33.993 [2024-07-23 00:29:48.450846] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.993 [2024-07-23 00:29:48.450855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:33.993 [2024-07-23 00:29:48.450864] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
00:25:33.993 [2024-07-23 00:29:48.450872] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.993 [2024-07-23 00:29:48.450882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:33.993 [2024-07-23 00:29:48.450891] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:33.993 [2024-07-23 00:29:48.450900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:33.993 [2024-07-23 00:29:48.450909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:33.993 [2024-07-23 00:29:48.450918] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:33.993 [2024-07-23 00:29:48.450929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:33.993 [2024-07-23 00:29:48.450938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:33.993 [2024-07-23 00:29:48.450947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:33.993 [2024-07-23 00:29:48.450955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:33.993 [2024-07-23 00:29:48.450964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:33.993 [2024-07-23 00:29:48.450973] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:33.993 [2024-07-23 00:29:48.450982] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:33.993 [2024-07-23 00:29:48.450990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:33.993 [2024-07-23 00:29:48.450999] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:33.993 [2024-07-23 00:29:48.451007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:33.993 [2024-07-23 00:29:48.451016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:33.993 [2024-07-23 00:29:48.451025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:33.993 [2024-07-23 00:29:48.451033] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:33.993 [2024-07-23 00:29:48.451042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:33.993 [2024-07-23 00:29:48.451051] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:33.993 [2024-07-23 00:29:48.451059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.993 [2024-07-23 00:29:48.451076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:33.993 [2024-07-23 00:29:48.451085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:33.993 [2024-07-23 00:29:48.451094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.993 [2024-07-23 00:29:48.451103] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:33.993 [2024-07-23 00:29:48.451113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:33.993 [2024-07-23 00:29:48.451125] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:33.993 [2024-07-23 00:29:48.451134] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.993 [2024-07-23 00:29:48.451151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:33.993 [2024-07-23 00:29:48.451160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:33.993 [2024-07-23 00:29:48.451169] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:33.993 [2024-07-23 00:29:48.451178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:33.993 [2024-07-23 00:29:48.451187] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:33.993 [2024-07-23 00:29:48.451196] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:33.993 [2024-07-23 00:29:48.451206] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:33.993 [2024-07-23 00:29:48.451217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:33.993 [2024-07-23 00:29:48.451228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:33.993 [2024-07-23 00:29:48.451241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:33.993 [2024-07-23 00:29:48.451251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:33.993 [2024-07-23 00:29:48.451274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:33.993 [2024-07-23 00:29:48.451285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:33.993 [2024-07-23 00:29:48.451295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:33.994 [2024-07-23 00:29:48.451305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:33.994 [2024-07-23 00:29:48.451315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:33.994 [2024-07-23 00:29:48.451325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:33.994 [2024-07-23 00:29:48.451335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:33.994 [2024-07-23 00:29:48.451345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:33.994 [2024-07-23 00:29:48.451355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:33.994 [2024-07-23 00:29:48.451365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:33.994 [2024-07-23 00:29:48.451375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:33.994 [2024-07-23 00:29:48.451385] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:33.994 [2024-07-23 00:29:48.451396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:33.994 [2024-07-23 00:29:48.451407] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:33.994 [2024-07-23 00:29:48.451419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:33.994 [2024-07-23 00:29:48.451430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:33.994 [2024-07-23 00:29:48.451441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:33.994 [2024-07-23 00:29:48.451459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.451470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:33.994 [2024-07-23 00:29:48.451480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:25:33.994 [2024-07-23 00:29:48.451489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.468309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.468346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:33.994 [2024-07-23 00:29:48.468363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.804 ms 00:25:33.994 [2024-07-23 00:29:48.468376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.468467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.468481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:33.994 [2024-07-23 00:29:48.468494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:33.994 [2024-07-23 00:29:48.468507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.479190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.479224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:33.994 [2024-07-23 00:29:48.479236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.639 ms 00:25:33.994 [2024-07-23 00:29:48.479247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.479293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.479305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:33.994 [2024-07-23 00:29:48.479319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:33.994 [2024-07-23 00:29:48.479329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.479425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.479437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:33.994 [2024-07-23 00:29:48.479457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:33.994 [2024-07-23 00:29:48.479466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.479575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.479587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:25:33.994 [2024-07-23 00:29:48.479597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:25:33.994 [2024-07-23 00:29:48.479607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.485415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.485449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:33.994 [2024-07-23 00:29:48.485462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.790 ms 00:25:33.994 [2024-07-23 00:29:48.485471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.485598] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:33.994 [2024-07-23 00:29:48.485619] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:33.994 [2024-07-23 00:29:48.485632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.485648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:33.994 [2024-07-23 00:29:48.485662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:33.994 [2024-07-23 00:29:48.485672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.495600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.495629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:33.994 [2024-07-23 00:29:48.495639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.928 ms 00:25:33.994 [2024-07-23 00:29:48.495665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.495776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.495787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:33.994 [2024-07-23 00:29:48.495805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:33.994 [2024-07-23 00:29:48.495815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.495874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.495889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:33.994 [2024-07-23 00:29:48.495899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:33.994 [2024-07-23 00:29:48.495908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.496172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.496193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:33.994 [2024-07-23 00:29:48.496204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:25:33.994 [2024-07-23 00:29:48.496213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.496232] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:25:33.994 [2024-07-23 00:29:48.496245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:33.994 [2024-07-23 00:29:48.496284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:33.994 [2024-07-23 00:29:48.496294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:33.994 [2024-07-23 00:29:48.496304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.503670] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:33.994 [2024-07-23 00:29:48.503867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.503880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:33.994 [2024-07-23 00:29:48.503891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.554 ms 00:25:33.994 [2024-07-23 00:29:48.503904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.506164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.506194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:33.994 [2024-07-23 00:29:48.506205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.241 ms 00:25:33.994 [2024-07-23 00:29:48.506215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.506315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.506340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:33.994 [2024-07-23 00:29:48.506351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:33.994 [2024-07-23 00:29:48.506364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.506406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.506417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:33.994 [2024-07-23 00:29:48.506427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:33.994 [2024-07-23 00:29:48.506437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.506469] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:33.994 [2024-07-23 00:29:48.506480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.506497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:33.994 [2024-07-23 00:29:48.506513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:33.994 [2024-07-23 00:29:48.506523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.510483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.510517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:33.994 [2024-07-23 00:29:48.510529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.938 ms 00:25:33.994 [2024-07-23 00:29:48.510539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.510603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.994 [2024-07-23 00:29:48.510615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
00:25:33.994 [2024-07-23 00:29:48.510625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:33.994 [2024-07-23 00:29:48.510635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.994 [2024-07-23 00:29:48.511752] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 64.882 ms, result 0 00:26:12.240  Copying: 27/1024 [MB] (27 MBps) Copying: 56/1024 [MB] (28 MBps) Copying: 83/1024 [MB] (27 MBps) Copying: 111/1024 [MB] (28 MBps) Copying: 139/1024 [MB] (27 MBps) Copying: 166/1024 [MB] (27 MBps) Copying: 194/1024 [MB] (27 MBps) Copying: 221/1024 [MB] (26 MBps) Copying: 248/1024 [MB] (26 MBps) Copying: 276/1024 [MB] (28 MBps) Copying: 303/1024 [MB] (26 MBps) Copying: 330/1024 [MB] (27 MBps) Copying: 356/1024 [MB] (26 MBps) Copying: 384/1024 [MB] (28 MBps) Copying: 411/1024 [MB] (26 MBps) Copying: 439/1024 [MB] (28 MBps) Copying: 467/1024 [MB] (27 MBps) Copying: 493/1024 [MB] (26 MBps) Copying: 520/1024 [MB] (27 MBps) Copying: 548/1024 [MB] (27 MBps) Copying: 574/1024 [MB] (26 MBps) Copying: 599/1024 [MB] (25 MBps) Copying: 625/1024 [MB] (25 MBps) Copying: 650/1024 [MB] (25 MBps) Copying: 675/1024 [MB] (25 MBps) Copying: 700/1024 [MB] (25 MBps) Copying: 727/1024 [MB] (26 MBps) Copying: 752/1024 [MB] (25 MBps) Copying: 778/1024 [MB] (25 MBps) Copying: 803/1024 [MB] (25 MBps) Copying: 830/1024 [MB] (26 MBps) Copying: 857/1024 [MB] (26 MBps) Copying: 883/1024 [MB] (26 MBps) Copying: 910/1024 [MB] (27 MBps) Copying: 938/1024 [MB] (27 MBps) Copying: 965/1024 [MB] (27 MBps) Copying: 994/1024 [MB] (29 MBps) Copying: 1021/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-23 00:30:26.790937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.240 [2024-07-23 00:30:26.791010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:12.240 [2024-07-23 00:30:26.791033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:12.240 [2024-07-23 00:30:26.791050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.240 [2024-07-23 00:30:26.791091] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:12.240 [2024-07-23 00:30:26.791800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.240 [2024-07-23 00:30:26.791825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:12.240 [2024-07-23 00:30:26.791842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:26:12.240 [2024-07-23 00:30:26.791857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.240 [2024-07-23 00:30:26.792090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.240 [2024-07-23 00:30:26.792113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:12.240 [2024-07-23 00:30:26.792129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:26:12.240 [2024-07-23 00:30:26.792144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.240 [2024-07-23 00:30:26.792188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.240 [2024-07-23 00:30:26.792205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:26:12.240 [2024-07-23 00:30:26.792221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 
00:26:12.240 [2024-07-23 00:30:26.792236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.240 [2024-07-23 00:30:26.792310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.240 [2024-07-23 00:30:26.792327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:26:12.240 [2024-07-23 00:30:26.792343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:12.240 [2024-07-23 00:30:26.792358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.240 [2024-07-23 00:30:26.792381] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:12.240 [2024-07-23 00:30:26.792401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 
00:30:26.792748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.792998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.793015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.793032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.793050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.793067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.793084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.793101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:12.240 [2024-07-23 00:30:26.793118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 
00:26:12.241 [2024-07-23 00:30:26.793187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 
wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.793984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:12.241 [2024-07-23 00:30:26.794398] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:12.241 [2024-07-23 00:30:26.794428] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88705295-19ad-48c6-b0e8-453394f04b93 00:26:12.241 [2024-07-23 00:30:26.794445] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:12.241 [2024-07-23 00:30:26.794461] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:26:12.241 [2024-07-23 00:30:26.794476] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:12.241 [2024-07-23 00:30:26.794491] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:12.241 [2024-07-23 00:30:26.794506] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:12.241 [2024-07-23 00:30:26.794521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:12.241 [2024-07-23 00:30:26.794536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:12.241 [2024-07-23 00:30:26.794551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:12.241 [2024-07-23 00:30:26.794565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:12.241 [2024-07-23 00:30:26.794580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.241 [2024-07-23 00:30:26.794596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:12.241 [2024-07-23 00:30:26.794613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.204 ms 00:26:12.241 [2024-07-23 00:30:26.794632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.241 [2024-07-23 00:30:26.796483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.241 [2024-07-23 00:30:26.796518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:12.241 [2024-07-23 00:30:26.796536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.827 ms 00:26:12.241 [2024-07-23 00:30:26.796552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.241 [2024-07-23 00:30:26.796677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.241 [2024-07-23 00:30:26.796701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:12.241 [2024-07-23 00:30:26.796722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:26:12.241 [2024-07-23 00:30:26.796737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.241 [2024-07-23 00:30:26.803314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.241 [2024-07-23 00:30:26.803366] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:12.241 [2024-07-23 00:30:26.803385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.241 [2024-07-23 00:30:26.803402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.241 [2024-07-23 00:30:26.803472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.241 [2024-07-23 00:30:26.803489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:12.241 [2024-07-23 00:30:26.803509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.241 [2024-07-23 00:30:26.803524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.241 [2024-07-23 00:30:26.803598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.241 [2024-07-23 00:30:26.803623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:12.241 [2024-07-23 00:30:26.803640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.241 [2024-07-23 00:30:26.803655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.241 [2024-07-23 00:30:26.803681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.241 [2024-07-23 00:30:26.803696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:12.241 [2024-07-23 00:30:26.803722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.242 [2024-07-23 00:30:26.803749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.242 [2024-07-23 00:30:26.818086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.242 [2024-07-23 00:30:26.818185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:12.242 [2024-07-23 00:30:26.818217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.242 [2024-07-23 00:30:26.818240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.242 [2024-07-23 00:30:26.830235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.242 [2024-07-23 00:30:26.830494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:12.242 [2024-07-23 00:30:26.830522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.242 [2024-07-23 00:30:26.830537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.242 [2024-07-23 00:30:26.830626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.242 [2024-07-23 00:30:26.830643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:12.242 [2024-07-23 00:30:26.830659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.242 [2024-07-23 00:30:26.830673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.242 [2024-07-23 00:30:26.830723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.242 [2024-07-23 00:30:26.830744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:12.242 [2024-07-23 00:30:26.830760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.242 [2024-07-23 00:30:26.830776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.242 [2024-07-23 00:30:26.830856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
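The ftl_dev_dump_stats block above reports 'WAF: inf' because the device has seen 32 total (metadata) writes but no user writes yet; the figure is consistent with total writes divided by user writes, as the second stats dump at the end of this run (95776 / 95744) also shows. A minimal sketch of that arithmetic, with illustrative names that are not SPDK API:

    def waf(total_writes: int, user_writes: int) -> float:
        # Write amplification factor: media writes per user write.
        if user_writes == 0:
            return float("inf")            # first dump: 32 / 0 -> 'WAF: inf'
        return total_writes / user_writes

    print(waf(32, 0))                      # inf
    print(round(waf(95776, 95744), 4))     # 1.0003, matching the final dump
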
00:26:12.242 [2024-07-23 00:30:26.830874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:12.242 [2024-07-23 00:30:26.830889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.242 [2024-07-23 00:30:26.830904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.242 [2024-07-23 00:30:26.830942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.242 [2024-07-23 00:30:26.830959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:12.242 [2024-07-23 00:30:26.830975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.242 [2024-07-23 00:30:26.830989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.242 [2024-07-23 00:30:26.831051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.242 [2024-07-23 00:30:26.831068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:12.242 [2024-07-23 00:30:26.831083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.242 [2024-07-23 00:30:26.831106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.242 [2024-07-23 00:30:26.831174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.242 [2024-07-23 00:30:26.831190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:12.242 [2024-07-23 00:30:26.831205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.242 [2024-07-23 00:30:26.831220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.242 [2024-07-23 00:30:26.831383] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 40.470 ms, result 0 00:26:12.501 00:26:12.501 00:26:12.501 00:30:27 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:14.405 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:14.405 00:30:28 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:26:14.405 [2024-07-23 00:30:28.843399] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
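The restore check above ('testfile: OK') confirms that the data written through ftl0 before the fast shutdown reads back bit-identical. A minimal Python sketch of the same manifest check, assuming the standard md5sum '<hex digest>  <path>' format for the testfile.md5 shown above:

    import hashlib

    def md5_ok(manifest: str) -> bool:
        # Read the one-entry manifest: '<expected digest>  <file path>'.
        with open(manifest) as m:
            expected, path = m.read().split(None, 1)
        h = hashlib.md5()
        with open(path.strip(), "rb") as f:
            # Hash in 1 MiB chunks so large test files need little memory.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest() == expected

    # md5_ok('/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5') -> True here
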
00:26:14.405 [2024-07-23 00:30:28.843555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95403 ] 00:26:14.405 [2024-07-23 00:30:28.995683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.405 [2024-07-23 00:30:29.039704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.665 [2024-07-23 00:30:29.140692] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:14.665 [2024-07-23 00:30:29.140755] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:14.665 [2024-07-23 00:30:29.291807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.665 [2024-07-23 00:30:29.291859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:14.665 [2024-07-23 00:30:29.291883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:14.665 [2024-07-23 00:30:29.291894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.665 [2024-07-23 00:30:29.291951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.665 [2024-07-23 00:30:29.291963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:14.665 [2024-07-23 00:30:29.291973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:14.665 [2024-07-23 00:30:29.291986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.665 [2024-07-23 00:30:29.292007] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:14.665 [2024-07-23 00:30:29.292254] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:14.665 [2024-07-23 00:30:29.292292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.665 [2024-07-23 00:30:29.292306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:14.665 [2024-07-23 00:30:29.292317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:26:14.665 [2024-07-23 00:30:29.292326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.665 [2024-07-23 00:30:29.292631] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:26:14.665 [2024-07-23 00:30:29.292657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.665 [2024-07-23 00:30:29.292676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:14.665 [2024-07-23 00:30:29.292694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:14.665 [2024-07-23 00:30:29.292708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.665 [2024-07-23 00:30:29.292755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.665 [2024-07-23 00:30:29.292766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:14.665 [2024-07-23 00:30:29.292776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:14.665 [2024-07-23 00:30:29.292786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.665 [2024-07-23 00:30:29.293165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.665 [2024-07-23 
00:30:29.293185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:14.665 [2024-07-23 00:30:29.293196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:26:14.665 [2024-07-23 00:30:29.293216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.665 [2024-07-23 00:30:29.293308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.665 [2024-07-23 00:30:29.293322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:14.666 [2024-07-23 00:30:29.293339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:14.666 [2024-07-23 00:30:29.293349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.666 [2024-07-23 00:30:29.293376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.666 [2024-07-23 00:30:29.293388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:14.666 [2024-07-23 00:30:29.293398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:14.666 [2024-07-23 00:30:29.293411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.666 [2024-07-23 00:30:29.293435] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:14.666 [2024-07-23 00:30:29.295186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.666 [2024-07-23 00:30:29.295207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:14.666 [2024-07-23 00:30:29.295218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.758 ms 00:26:14.666 [2024-07-23 00:30:29.295228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.666 [2024-07-23 00:30:29.295276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.666 [2024-07-23 00:30:29.295291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:14.666 [2024-07-23 00:30:29.295302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:14.666 [2024-07-23 00:30:29.295311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.666 [2024-07-23 00:30:29.295334] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:14.666 [2024-07-23 00:30:29.295356] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:14.666 [2024-07-23 00:30:29.295394] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:14.666 [2024-07-23 00:30:29.295413] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:14.666 [2024-07-23 00:30:29.295494] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:14.666 [2024-07-23 00:30:29.295511] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:14.666 [2024-07-23 00:30:29.295530] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:14.666 [2024-07-23 00:30:29.295546] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:14.666 [2024-07-23 00:30:29.295558] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:14.666 [2024-07-23 00:30:29.295569] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:14.666 [2024-07-23 00:30:29.295585] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:14.666 [2024-07-23 00:30:29.295601] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:14.666 [2024-07-23 00:30:29.295611] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:14.666 [2024-07-23 00:30:29.295624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.666 [2024-07-23 00:30:29.295633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:14.666 [2024-07-23 00:30:29.295644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:26:14.666 [2024-07-23 00:30:29.295653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.666 [2024-07-23 00:30:29.295719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.666 [2024-07-23 00:30:29.295732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:14.666 [2024-07-23 00:30:29.295742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:14.666 [2024-07-23 00:30:29.295751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.666 [2024-07-23 00:30:29.295843] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:14.666 [2024-07-23 00:30:29.295862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:14.666 [2024-07-23 00:30:29.295872] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:14.666 [2024-07-23 00:30:29.295889] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.666 [2024-07-23 00:30:29.295899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:14.666 [2024-07-23 00:30:29.295908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:14.666 [2024-07-23 00:30:29.295918] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:14.666 [2024-07-23 00:30:29.295927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:14.666 [2024-07-23 00:30:29.295941] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:14.666 [2024-07-23 00:30:29.295950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:14.666 [2024-07-23 00:30:29.295959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:14.666 [2024-07-23 00:30:29.295968] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:14.666 [2024-07-23 00:30:29.295978] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:14.666 [2024-07-23 00:30:29.295987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:14.666 [2024-07-23 00:30:29.295996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:14.666 [2024-07-23 00:30:29.296005] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.666 [2024-07-23 00:30:29.296014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:14.666 [2024-07-23 00:30:29.296023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:14.666 [2024-07-23 00:30:29.296032] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:26:14.666 [2024-07-23 00:30:29.296041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:14.666 [2024-07-23 00:30:29.296050] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:14.666 [2024-07-23 00:30:29.296059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:14.666 [2024-07-23 00:30:29.296068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:14.666 [2024-07-23 00:30:29.296077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:14.666 [2024-07-23 00:30:29.296088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:14.666 [2024-07-23 00:30:29.296097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:14.666 [2024-07-23 00:30:29.296106] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:14.666 [2024-07-23 00:30:29.296115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:14.666 [2024-07-23 00:30:29.296123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:14.666 [2024-07-23 00:30:29.296132] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:14.666 [2024-07-23 00:30:29.296141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:14.666 [2024-07-23 00:30:29.296150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:14.666 [2024-07-23 00:30:29.296158] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:14.666 [2024-07-23 00:30:29.296167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:14.666 [2024-07-23 00:30:29.296176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:14.666 [2024-07-23 00:30:29.296185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:14.666 [2024-07-23 00:30:29.296194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:14.666 [2024-07-23 00:30:29.296203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:14.666 [2024-07-23 00:30:29.296212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:14.666 [2024-07-23 00:30:29.296221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.666 [2024-07-23 00:30:29.296235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:14.666 [2024-07-23 00:30:29.296244] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:14.666 [2024-07-23 00:30:29.296252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.666 [2024-07-23 00:30:29.296271] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:14.666 [2024-07-23 00:30:29.296283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:14.666 [2024-07-23 00:30:29.296296] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:14.666 [2024-07-23 00:30:29.296306] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.666 [2024-07-23 00:30:29.296315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:14.666 [2024-07-23 00:30:29.296324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:14.666 [2024-07-23 00:30:29.296333] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:14.666 [2024-07-23 00:30:29.296342] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:14.666 [2024-07-23 00:30:29.296351] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:14.666 [2024-07-23 00:30:29.296360] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:14.666 [2024-07-23 00:30:29.296370] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:14.666 [2024-07-23 00:30:29.296382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:14.666 [2024-07-23 00:30:29.296394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:14.666 [2024-07-23 00:30:29.296407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:14.666 [2024-07-23 00:30:29.296417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:14.666 [2024-07-23 00:30:29.296427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:14.666 [2024-07-23 00:30:29.296437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:14.666 [2024-07-23 00:30:29.296447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:14.666 [2024-07-23 00:30:29.296457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:14.666 [2024-07-23 00:30:29.296467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:14.666 [2024-07-23 00:30:29.296477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:14.667 [2024-07-23 00:30:29.296487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:14.667 [2024-07-23 00:30:29.296497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:14.667 [2024-07-23 00:30:29.296506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:14.667 [2024-07-23 00:30:29.296516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:14.667 [2024-07-23 00:30:29.296527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:14.667 [2024-07-23 00:30:29.296537] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:14.667 [2024-07-23 00:30:29.296547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:14.667 [2024-07-23 00:30:29.296565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
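The superblock metadata dump lists each region's offset and size as block counts in hex, while the layout dump above gives MiB; the two agree if one FTL block is 4096 bytes, an assumption consistent with every region in this log. A small conversion sketch:

    BLOCK_SIZE = 4096  # bytes per FTL block (assumed; reproduces the MiB figures above)

    def blocks_to_mib(blk_sz_hex: str) -> float:
        return int(blk_sz_hex, 16) * BLOCK_SIZE / (1 << 20)

    print(blocks_to_mib("0x5000"))     # 80.0     -> Region l2p: 80.00 MiB
    print(blocks_to_mib("0x800"))      # 8.0      -> each p2l region: 8.00 MiB
    print(blocks_to_mib("0x1900000"))  # 102400.0 -> Region data_btm: 102400.00 MiB
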
00:26:14.667 [2024-07-23 00:30:29.296578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:14.667 [2024-07-23 00:30:29.296589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:14.667 [2024-07-23 00:30:29.296599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:14.667 [2024-07-23 00:30:29.296618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.667 [2024-07-23 00:30:29.296629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:14.667 [2024-07-23 00:30:29.296639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:26:14.667 [2024-07-23 00:30:29.296649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.667 [2024-07-23 00:30:29.315345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.667 [2024-07-23 00:30:29.315386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:14.667 [2024-07-23 00:30:29.315403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.682 ms 00:26:14.667 [2024-07-23 00:30:29.315416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.667 [2024-07-23 00:30:29.315525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.667 [2024-07-23 00:30:29.315556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:14.667 [2024-07-23 00:30:29.315570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:14.667 [2024-07-23 00:30:29.315583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.667 [2024-07-23 00:30:29.326831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.667 [2024-07-23 00:30:29.326874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:14.667 [2024-07-23 00:30:29.326891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.180 ms 00:26:14.667 [2024-07-23 00:30:29.326904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.667 [2024-07-23 00:30:29.326947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.667 [2024-07-23 00:30:29.326961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:14.667 [2024-07-23 00:30:29.326979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:14.667 [2024-07-23 00:30:29.326992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.667 [2024-07-23 00:30:29.327123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.667 [2024-07-23 00:30:29.327148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:14.667 [2024-07-23 00:30:29.327162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:14.667 [2024-07-23 00:30:29.327175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.667 [2024-07-23 00:30:29.327326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.667 [2024-07-23 00:30:29.327344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:14.667 [2024-07-23 00:30:29.327358] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:26:14.667 [2024-07-23 00:30:29.327370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.667 [2024-07-23 00:30:29.333681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.667 [2024-07-23 00:30:29.333718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:14.667 [2024-07-23 00:30:29.333730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.295 ms 00:26:14.667 [2024-07-23 00:30:29.333740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.667 [2024-07-23 00:30:29.333863] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:14.667 [2024-07-23 00:30:29.333879] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:14.667 [2024-07-23 00:30:29.333892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.667 [2024-07-23 00:30:29.333902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:14.667 [2024-07-23 00:30:29.333916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:14.667 [2024-07-23 00:30:29.333926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.667 [2024-07-23 00:30:29.344429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.667 [2024-07-23 00:30:29.344459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:14.667 [2024-07-23 00:30:29.344472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.492 ms 00:26:14.667 [2024-07-23 00:30:29.344482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.667 [2024-07-23 00:30:29.344584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.667 [2024-07-23 00:30:29.344595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:14.667 [2024-07-23 00:30:29.344606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:26:14.667 [2024-07-23 00:30:29.344615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.667 [2024-07-23 00:30:29.344679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.667 [2024-07-23 00:30:29.344700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:14.667 [2024-07-23 00:30:29.344710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:14.667 [2024-07-23 00:30:29.344720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.667 [2024-07-23 00:30:29.344984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.667 [2024-07-23 00:30:29.344999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:14.667 [2024-07-23 00:30:29.345009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:26:14.667 [2024-07-23 00:30:29.345020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.667 [2024-07-23 00:30:29.345038] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:26:14.667 [2024-07-23 00:30:29.345051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.667 [2024-07-23 00:30:29.345065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:26:14.667 [2024-07-23 00:30:29.345075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:14.667 [2024-07-23 00:30:29.345084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.926 [2024-07-23 00:30:29.352364] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:14.926 [2024-07-23 00:30:29.352544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.926 [2024-07-23 00:30:29.352556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:14.926 [2024-07-23 00:30:29.352568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.443 ms 00:26:14.926 [2024-07-23 00:30:29.352581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.926 [2024-07-23 00:30:29.354599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.926 [2024-07-23 00:30:29.354629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:14.926 [2024-07-23 00:30:29.354655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.000 ms 00:26:14.926 [2024-07-23 00:30:29.354665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.926 [2024-07-23 00:30:29.354745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.926 [2024-07-23 00:30:29.354757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:14.926 [2024-07-23 00:30:29.354767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:14.926 [2024-07-23 00:30:29.354780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.926 [2024-07-23 00:30:29.354826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.926 [2024-07-23 00:30:29.354837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:14.926 [2024-07-23 00:30:29.354854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:14.926 [2024-07-23 00:30:29.354874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.926 [2024-07-23 00:30:29.354906] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:14.926 [2024-07-23 00:30:29.354918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.926 [2024-07-23 00:30:29.354927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:14.926 [2024-07-23 00:30:29.354952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:14.926 [2024-07-23 00:30:29.354962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.926 [2024-07-23 00:30:29.359614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.926 [2024-07-23 00:30:29.359646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:14.926 [2024-07-23 00:30:29.359674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.638 ms 00:26:14.926 [2024-07-23 00:30:29.359692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.926 [2024-07-23 00:30:29.359759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.926 [2024-07-23 00:30:29.359772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:14.926 [2024-07-23 00:30:29.359782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 
00:26:14.926 [2024-07-23 00:30:29.359791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.926 [2024-07-23 00:30:29.360823] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 68.735 ms, result 0 00:26:55.696  Copying: 24/1024 [MB] (24 MBps) Copying: 49/1024 [MB] (25 MBps) Copying: 75/1024 [MB] (25 MBps) Copying: 100/1024 [MB] (25 MBps) Copying: 125/1024 [MB] (24 MBps) Copying: 150/1024 [MB] (25 MBps) Copying: 175/1024 [MB] (25 MBps) Copying: 201/1024 [MB] (25 MBps) Copying: 225/1024 [MB] (24 MBps) Copying: 251/1024 [MB] (25 MBps) Copying: 276/1024 [MB] (25 MBps) Copying: 302/1024 [MB] (25 MBps) Copying: 327/1024 [MB] (25 MBps) Copying: 352/1024 [MB] (25 MBps) Copying: 378/1024 [MB] (25 MBps) Copying: 405/1024 [MB] (27 MBps) Copying: 432/1024 [MB] (27 MBps) Copying: 458/1024 [MB] (26 MBps) Copying: 484/1024 [MB] (25 MBps) Copying: 509/1024 [MB] (25 MBps) Copying: 534/1024 [MB] (25 MBps) Copying: 561/1024 [MB] (26 MBps) Copying: 586/1024 [MB] (24 MBps) Copying: 611/1024 [MB] (25 MBps) Copying: 636/1024 [MB] (25 MBps) Copying: 662/1024 [MB] (25 MBps) Copying: 688/1024 [MB] (25 MBps) Copying: 713/1024 [MB] (25 MBps) Copying: 738/1024 [MB] (24 MBps) Copying: 763/1024 [MB] (24 MBps) Copying: 788/1024 [MB] (25 MBps) Copying: 813/1024 [MB] (24 MBps) Copying: 838/1024 [MB] (25 MBps) Copying: 862/1024 [MB] (24 MBps) Copying: 888/1024 [MB] (25 MBps) Copying: 913/1024 [MB] (25 MBps) Copying: 939/1024 [MB] (25 MBps) Copying: 965/1024 [MB] (26 MBps) Copying: 991/1024 [MB] (25 MBps) Copying: 1017/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-23 00:31:10.199448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.696 [2024-07-23 00:31:10.199507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:55.696 [2024-07-23 00:31:10.199524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:55.696 [2024-07-23 00:31:10.199535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.696 [2024-07-23 00:31:10.203452] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:55.696 [2024-07-23 00:31:10.204816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.696 [2024-07-23 00:31:10.204850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:55.696 [2024-07-23 00:31:10.204863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.323 ms 00:26:55.696 [2024-07-23 00:31:10.204875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.696 [2024-07-23 00:31:10.215970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.696 [2024-07-23 00:31:10.216005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:55.696 [2024-07-23 00:31:10.216035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.890 ms 00:26:55.696 [2024-07-23 00:31:10.216045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.696 [2024-07-23 00:31:10.216085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.696 [2024-07-23 00:31:10.216096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:26:55.696 [2024-07-23 00:31:10.216108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:55.696 [2024-07-23 00:31:10.216118] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.696 [2024-07-23 00:31:10.216164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.696 [2024-07-23 00:31:10.216178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:26:55.696 [2024-07-23 00:31:10.216188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:55.696 [2024-07-23 00:31:10.216198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.696 [2024-07-23 00:31:10.216214] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:55.696 [2024-07-23 00:31:10.216227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 95744 / 261120 wr_cnt: 1 state: open 00:26:55.696 [2024-07-23 00:31:10.216239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:55.696 [2024-07-23 00:31:10.216251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:55.696 [2024-07-23 00:31:10.216261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:55.696 [2024-07-23 00:31:10.216272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:55.696 [2024-07-23 00:31:10.216294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:55.696 [2024-07-23 00:31:10.216304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:55.696 [2024-07-23 00:31:10.216315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216451] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 
00:31:10.216707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:26:55.697 [2024-07-23 00:31:10.216973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.216994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:26:55.697 [2024-07-23 00:31:10.217237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:55.698 [2024-07-23 00:31:10.217248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:55.698 [2024-07-23 00:31:10.217258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:55.698 [2024-07-23 00:31:10.217276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:55.698 [2024-07-23 00:31:10.217287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:55.698 [2024-07-23 00:31:10.217303] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:55.698 [2024-07-23 00:31:10.217324] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88705295-19ad-48c6-b0e8-453394f04b93 00:26:55.698 [2024-07-23 00:31:10.217335] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 95744 00:26:55.698 [2024-07-23 00:31:10.217351] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 95776 00:26:55.698 [2024-07-23 00:31:10.217361] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 95744 00:26:55.698 [2024-07-23 00:31:10.217371] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:26:55.698 [2024-07-23 00:31:10.217391] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:55.698 [2024-07-23 00:31:10.217400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:55.698 [2024-07-23 00:31:10.217411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:55.698 [2024-07-23 00:31:10.217420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:55.698 [2024-07-23 00:31:10.217428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:55.698 [2024-07-23 00:31:10.217438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.698 [2024-07-23 00:31:10.217448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:55.698 [2024-07-23 00:31:10.217457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:26:55.698 [2024-07-23 00:31:10.217467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.698 [2024-07-23 00:31:10.219207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.698 [2024-07-23 00:31:10.219230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:55.698 [2024-07-23 00:31:10.219246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.719 ms 00:26:55.698 [2024-07-23 00:31:10.219255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.698 [2024-07-23 00:31:10.219389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.698 [2024-07-23 00:31:10.219401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:55.698 [2024-07-23 00:31:10.219412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:26:55.698 [2024-07-23 00:31:10.219422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.698 [2024-07-23 00:31:10.225365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.698 [2024-07-23 00:31:10.225383] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:55.698 [2024-07-23 00:31:10.225402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.698 [2024-07-23 00:31:10.225412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.698 [2024-07-23 00:31:10.225461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.698 [2024-07-23 00:31:10.225473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:55.698 [2024-07-23 00:31:10.225483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.698 [2024-07-23 00:31:10.225492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.698 [2024-07-23 00:31:10.225531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.698 [2024-07-23 00:31:10.225546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:55.698 [2024-07-23 00:31:10.225555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.698 [2024-07-23 00:31:10.225564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.698 [2024-07-23 00:31:10.225581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.698 [2024-07-23 00:31:10.225591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:55.698 [2024-07-23 00:31:10.225608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.698 [2024-07-23 00:31:10.225624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.698 [2024-07-23 00:31:10.237044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.698 [2024-07-23 00:31:10.237087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:55.698 [2024-07-23 00:31:10.237107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.698 [2024-07-23 00:31:10.237117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.698 [2024-07-23 00:31:10.246475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.698 [2024-07-23 00:31:10.246505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:55.698 [2024-07-23 00:31:10.246516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.698 [2024-07-23 00:31:10.246542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.698 [2024-07-23 00:31:10.246589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.698 [2024-07-23 00:31:10.246601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:55.698 [2024-07-23 00:31:10.246617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.698 [2024-07-23 00:31:10.246645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.698 [2024-07-23 00:31:10.246669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.698 [2024-07-23 00:31:10.246679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:55.698 [2024-07-23 00:31:10.246688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.698 [2024-07-23 00:31:10.246698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.698 [2024-07-23 00:31:10.246757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:26:55.698 [2024-07-23 00:31:10.246769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:55.698 [2024-07-23 00:31:10.246779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.698 [2024-07-23 00:31:10.246790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.698 [2024-07-23 00:31:10.246851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.698 [2024-07-23 00:31:10.246863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:55.698 [2024-07-23 00:31:10.246872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.698 [2024-07-23 00:31:10.246882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.698 [2024-07-23 00:31:10.246924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.698 [2024-07-23 00:31:10.246935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:55.698 [2024-07-23 00:31:10.246951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.698 [2024-07-23 00:31:10.246961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.698 [2024-07-23 00:31:10.247006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.698 [2024-07-23 00:31:10.247017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:55.698 [2024-07-23 00:31:10.247027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.698 [2024-07-23 00:31:10.247037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.698 [2024-07-23 00:31:10.247151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 48.429 ms, result 0 00:26:56.637 00:26:56.637 00:26:56.637 00:31:11 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:26:56.637 [2024-07-23 00:31:11.215905] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:26:56.637 [2024-07-23 00:31:11.216022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95832 ] 00:26:56.896 [2024-07-23 00:31:11.358142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.896 [2024-07-23 00:31:11.400876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.896 [2024-07-23 00:31:11.502419] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:56.896 [2024-07-23 00:31:11.502503] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:57.157 [2024-07-23 00:31:11.653177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.157 [2024-07-23 00:31:11.653230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:57.157 [2024-07-23 00:31:11.653259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:57.157 [2024-07-23 00:31:11.653269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.157 [2024-07-23 00:31:11.653330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.157 [2024-07-23 00:31:11.653346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:57.157 [2024-07-23 00:31:11.653373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:57.157 [2024-07-23 00:31:11.653386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.157 [2024-07-23 00:31:11.653420] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:57.157 [2024-07-23 00:31:11.653661] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:57.157 [2024-07-23 00:31:11.653680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.157 [2024-07-23 00:31:11.653693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:57.157 [2024-07-23 00:31:11.653715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:26:57.157 [2024-07-23 00:31:11.653726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.157 [2024-07-23 00:31:11.654069] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:26:57.157 [2024-07-23 00:31:11.654097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.157 [2024-07-23 00:31:11.654125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:57.157 [2024-07-23 00:31:11.654137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:57.157 [2024-07-23 00:31:11.654156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.157 [2024-07-23 00:31:11.654203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.157 [2024-07-23 00:31:11.654214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:57.157 [2024-07-23 00:31:11.654224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:57.157 [2024-07-23 00:31:11.654240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.157 [2024-07-23 00:31:11.654613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.157 [2024-07-23 
00:31:11.654633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:57.157 [2024-07-23 00:31:11.654643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:26:57.157 [2024-07-23 00:31:11.654656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.157 [2024-07-23 00:31:11.654739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.157 [2024-07-23 00:31:11.654751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:57.157 [2024-07-23 00:31:11.654761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:57.157 [2024-07-23 00:31:11.654777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.157 [2024-07-23 00:31:11.654806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.157 [2024-07-23 00:31:11.654817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:57.157 [2024-07-23 00:31:11.654828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:57.157 [2024-07-23 00:31:11.654840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.157 [2024-07-23 00:31:11.654863] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:57.157 [2024-07-23 00:31:11.656616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.157 [2024-07-23 00:31:11.656639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:57.157 [2024-07-23 00:31:11.656650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.760 ms 00:26:57.157 [2024-07-23 00:31:11.656660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.157 [2024-07-23 00:31:11.656699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.157 [2024-07-23 00:31:11.656709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:57.157 [2024-07-23 00:31:11.656719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:57.157 [2024-07-23 00:31:11.656729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.157 [2024-07-23 00:31:11.656760] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:57.157 [2024-07-23 00:31:11.656781] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:57.157 [2024-07-23 00:31:11.656822] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:57.157 [2024-07-23 00:31:11.656840] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:57.157 [2024-07-23 00:31:11.656919] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:57.157 [2024-07-23 00:31:11.656932] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:57.157 [2024-07-23 00:31:11.656944] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:57.157 [2024-07-23 00:31:11.656969] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:57.157 [2024-07-23 00:31:11.656981] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:57.157 [2024-07-23 00:31:11.656991] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:57.158 [2024-07-23 00:31:11.657000] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:57.158 [2024-07-23 00:31:11.657010] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:57.158 [2024-07-23 00:31:11.657019] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:57.158 [2024-07-23 00:31:11.657028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.158 [2024-07-23 00:31:11.657038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:57.158 [2024-07-23 00:31:11.657047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:26:57.158 [2024-07-23 00:31:11.657056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.158 [2024-07-23 00:31:11.657124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.158 [2024-07-23 00:31:11.657137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:57.158 [2024-07-23 00:31:11.657146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:57.158 [2024-07-23 00:31:11.657156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.158 [2024-07-23 00:31:11.657248] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:57.158 [2024-07-23 00:31:11.657272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:57.158 [2024-07-23 00:31:11.657284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:57.158 [2024-07-23 00:31:11.657294] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.158 [2024-07-23 00:31:11.657305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:57.158 [2024-07-23 00:31:11.657314] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:57.158 [2024-07-23 00:31:11.657324] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:57.158 [2024-07-23 00:31:11.657333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:57.158 [2024-07-23 00:31:11.657342] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:57.158 [2024-07-23 00:31:11.657351] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:57.158 [2024-07-23 00:31:11.657360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:57.158 [2024-07-23 00:31:11.657369] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:57.158 [2024-07-23 00:31:11.657381] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:57.158 [2024-07-23 00:31:11.657390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:57.158 [2024-07-23 00:31:11.657400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:57.158 [2024-07-23 00:31:11.657410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.158 [2024-07-23 00:31:11.657419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:57.158 [2024-07-23 00:31:11.657428] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:57.158 [2024-07-23 00:31:11.657437] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:26:57.158 [2024-07-23 00:31:11.657446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:57.158 [2024-07-23 00:31:11.657455] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:57.158 [2024-07-23 00:31:11.657464] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.158 [2024-07-23 00:31:11.657473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:57.158 [2024-07-23 00:31:11.657482] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:57.158 [2024-07-23 00:31:11.657491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.158 [2024-07-23 00:31:11.657500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:57.158 [2024-07-23 00:31:11.657509] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:57.158 [2024-07-23 00:31:11.657517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.158 [2024-07-23 00:31:11.657531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:57.158 [2024-07-23 00:31:11.657540] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:57.158 [2024-07-23 00:31:11.657549] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.158 [2024-07-23 00:31:11.657557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:57.158 [2024-07-23 00:31:11.657566] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:57.158 [2024-07-23 00:31:11.657575] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:57.158 [2024-07-23 00:31:11.657584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:57.158 [2024-07-23 00:31:11.657593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:57.158 [2024-07-23 00:31:11.657602] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:57.158 [2024-07-23 00:31:11.657611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:57.158 [2024-07-23 00:31:11.657620] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:57.158 [2024-07-23 00:31:11.657629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.158 [2024-07-23 00:31:11.657638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:57.158 [2024-07-23 00:31:11.657647] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:57.158 [2024-07-23 00:31:11.657656] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.158 [2024-07-23 00:31:11.657664] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:57.158 [2024-07-23 00:31:11.657679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:57.158 [2024-07-23 00:31:11.657691] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:57.158 [2024-07-23 00:31:11.657714] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.158 [2024-07-23 00:31:11.657723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:57.158 [2024-07-23 00:31:11.657732] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:57.158 [2024-07-23 00:31:11.657741] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:57.158 [2024-07-23 00:31:11.657750] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:57.158 [2024-07-23 00:31:11.657759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:57.158 [2024-07-23 00:31:11.657769] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:57.158 [2024-07-23 00:31:11.657778] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:57.158 [2024-07-23 00:31:11.657797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:57.158 [2024-07-23 00:31:11.657808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:57.158 [2024-07-23 00:31:11.657818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:57.158 [2024-07-23 00:31:11.657828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:57.158 [2024-07-23 00:31:11.657838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:57.158 [2024-07-23 00:31:11.657848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:57.158 [2024-07-23 00:31:11.657861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:57.158 [2024-07-23 00:31:11.657871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:57.158 [2024-07-23 00:31:11.657881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:57.158 [2024-07-23 00:31:11.657891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:57.158 [2024-07-23 00:31:11.657901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:57.158 [2024-07-23 00:31:11.657910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:57.158 [2024-07-23 00:31:11.657920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:57.158 [2024-07-23 00:31:11.657930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:57.158 [2024-07-23 00:31:11.657942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:57.158 [2024-07-23 00:31:11.657952] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:57.158 [2024-07-23 00:31:11.657962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:57.158 [2024-07-23 00:31:11.657972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:26:57.158 [2024-07-23 00:31:11.657983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:57.158 [2024-07-23 00:31:11.657993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:57.158 [2024-07-23 00:31:11.658003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:57.158 [2024-07-23 00:31:11.658021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.158 [2024-07-23 00:31:11.658034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:57.158 [2024-07-23 00:31:11.658050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms 00:26:57.158 [2024-07-23 00:31:11.658059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.158 [2024-07-23 00:31:11.677821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.158 [2024-07-23 00:31:11.677884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:57.158 [2024-07-23 00:31:11.677911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.747 ms 00:26:57.158 [2024-07-23 00:31:11.677938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.158 [2024-07-23 00:31:11.678087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.158 [2024-07-23 00:31:11.678109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:57.158 [2024-07-23 00:31:11.678129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:26:57.158 [2024-07-23 00:31:11.678147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.158 [2024-07-23 00:31:11.691186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.691236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:57.159 [2024-07-23 00:31:11.691257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.959 ms 00:26:57.159 [2024-07-23 00:31:11.691288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.691336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.691353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:57.159 [2024-07-23 00:31:11.691370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:57.159 [2024-07-23 00:31:11.691385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.691532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.691556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:57.159 [2024-07-23 00:31:11.691586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:57.159 [2024-07-23 00:31:11.691601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.691765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.691785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:57.159 [2024-07-23 00:31:11.691801] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:26:57.159 [2024-07-23 00:31:11.691817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.698468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.698505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:57.159 [2024-07-23 00:31:11.698527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.631 ms 00:26:57.159 [2024-07-23 00:31:11.698538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.698658] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:57.159 [2024-07-23 00:31:11.698675] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:57.159 [2024-07-23 00:31:11.698688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.698699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:57.159 [2024-07-23 00:31:11.698717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:57.159 [2024-07-23 00:31:11.698727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.709454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.709482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:57.159 [2024-07-23 00:31:11.709493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.726 ms 00:26:57.159 [2024-07-23 00:31:11.709519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.709618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.709629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:57.159 [2024-07-23 00:31:11.709639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:57.159 [2024-07-23 00:31:11.709649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.709695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.709711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:57.159 [2024-07-23 00:31:11.709721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:26:57.159 [2024-07-23 00:31:11.709730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.709996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.710009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:57.159 [2024-07-23 00:31:11.710019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:26:57.159 [2024-07-23 00:31:11.710028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.710045] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:26:57.159 [2024-07-23 00:31:11.710056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.710070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:26:57.159 [2024-07-23 00:31:11.710080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:57.159 [2024-07-23 00:31:11.710090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.717416] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:57.159 [2024-07-23 00:31:11.717603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.717622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:57.159 [2024-07-23 00:31:11.717639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.506 ms 00:26:57.159 [2024-07-23 00:31:11.717651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.719806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.719834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:57.159 [2024-07-23 00:31:11.719845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.136 ms 00:26:57.159 [2024-07-23 00:31:11.719855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.719905] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:26:57.159 [2024-07-23 00:31:11.720332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.720350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:57.159 [2024-07-23 00:31:11.720364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:26:57.159 [2024-07-23 00:31:11.720374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.720416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.720427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:57.159 [2024-07-23 00:31:11.720441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:57.159 [2024-07-23 00:31:11.720450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.720482] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:57.159 [2024-07-23 00:31:11.720493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.720503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:57.159 [2024-07-23 00:31:11.720513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:57.159 [2024-07-23 00:31:11.720525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.724573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.724609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:57.159 [2024-07-23 00:31:11.724621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.037 ms 00:26:57.159 [2024-07-23 00:31:11.724631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.724698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.159 [2024-07-23 00:31:11.724715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finalize initialization 00:26:57.159 [2024-07-23 00:31:11.724726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:57.159 [2024-07-23 00:31:11.724735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.159 [2024-07-23 00:31:11.727171] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 73.429 ms, result 0 00:27:34.815  Copying: 22/1024 [MB] (22 MBps) Copying: 51/1024 [MB] (28 MBps) Copying: 80/1024 [MB] (29 MBps) Copying: 109/1024 [MB] (28 MBps) Copying: 138/1024 [MB] (29 MBps) Copying: 164/1024 [MB] (26 MBps) Copying: 190/1024 [MB] (25 MBps) Copying: 216/1024 [MB] (26 MBps) Copying: 241/1024 [MB] (25 MBps) Copying: 267/1024 [MB] (25 MBps) Copying: 293/1024 [MB] (25 MBps) Copying: 318/1024 [MB] (25 MBps) Copying: 344/1024 [MB] (25 MBps) Copying: 370/1024 [MB] (26 MBps) Copying: 396/1024 [MB] (25 MBps) Copying: 422/1024 [MB] (26 MBps) Copying: 448/1024 [MB] (25 MBps) Copying: 474/1024 [MB] (26 MBps) Copying: 501/1024 [MB] (26 MBps) Copying: 527/1024 [MB] (25 MBps) Copying: 553/1024 [MB] (26 MBps) Copying: 580/1024 [MB] (26 MBps) Copying: 608/1024 [MB] (28 MBps) Copying: 636/1024 [MB] (28 MBps) Copying: 665/1024 [MB] (28 MBps) Copying: 694/1024 [MB] (29 MBps) Copying: 724/1024 [MB] (29 MBps) Copying: 753/1024 [MB] (29 MBps) Copying: 785/1024 [MB] (31 MBps) Copying: 815/1024 [MB] (30 MBps) Copying: 844/1024 [MB] (28 MBps) Copying: 873/1024 [MB] (29 MBps) Copying: 902/1024 [MB] (28 MBps) Copying: 930/1024 [MB] (28 MBps) Copying: 959/1024 [MB] (28 MBps) Copying: 988/1024 [MB] (28 MBps) Copying: 1016/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-23 00:31:49.305539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.815 [2024-07-23 00:31:49.305610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:34.815 [2024-07-23 00:31:49.305637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:34.815 [2024-07-23 00:31:49.305649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.815 [2024-07-23 00:31:49.305676] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:34.815 [2024-07-23 00:31:49.306376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.815 [2024-07-23 00:31:49.306391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:34.815 [2024-07-23 00:31:49.306411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:27:34.815 [2024-07-23 00:31:49.306426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.815 [2024-07-23 00:31:49.306958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.815 [2024-07-23 00:31:49.306986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:34.815 [2024-07-23 00:31:49.307002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:27:34.815 [2024-07-23 00:31:49.307016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.815 [2024-07-23 00:31:49.307055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.815 [2024-07-23 00:31:49.307071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:34.815 [2024-07-23 00:31:49.307086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 
ms 00:27:34.815 [2024-07-23 00:31:49.307100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.815 [2024-07-23 00:31:49.307172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.815 [2024-07-23 00:31:49.307189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:34.815 [2024-07-23 00:31:49.307204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:34.815 [2024-07-23 00:31:49.307218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.815 [2024-07-23 00:31:49.307239] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:34.815 [2024-07-23 00:31:49.307273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:27:34.816 [2024-07-23 00:31:49.307292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 
[2024-07-23 00:31:49.307590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 
state: free 00:27:34.816 [2024-07-23 00:31:49.307978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.307995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:34.816 [2024-07-23 00:31:49.308363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 
0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:27:34.816 [2024-07-23 00:31:49.308861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:27:34.817 [2024-07-23 00:31:49.308876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:27:34.817 [2024-07-23 00:31:49.308892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:27:34.817 [2024-07-23 00:31:49.308907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:27:34.817 [2024-07-23 00:31:49.308922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:27:34.817 [2024-07-23 00:31:49.308937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:27:34.817 [2024-07-23 00:31:49.308963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:27:34.817 [2024-07-23 00:31:49.308979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:27:34.817 [2024-07-23 00:31:49.308994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:27:34.817 [2024-07-23 00:31:49.309010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:27:34.817 [2024-07-23 00:31:49.309025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:27:34.817 [2024-07-23 00:31:49.309041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:27:34.817 [2024-07-23 00:31:49.309066] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:34.817 [2024-07-23 00:31:49.309103] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88705295-19ad-48c6-b0e8-453394f04b93
00:27:34.817 [2024-07-23 00:31:49.309129] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632
00:27:34.817 [2024-07-23 00:31:49.309143] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 37920
00:27:34.817 [2024-07-23 00:31:49.309157] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 37888
00:27:34.817 [2024-07-23 00:31:49.309177] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0008
00:27:34.817 [2024-07-23 00:31:49.309318] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:34.817 [2024-07-23 00:31:49.309334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:34.817 [2024-07-23 00:31:49.309348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:34.817 [2024-07-23 00:31:49.309362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:34.817 [2024-07-23 00:31:49.309377] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:34.817 [2024-07-23 00:31:49.309392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:34.817 [2024-07-23 00:31:49.309407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:27:34.817 [2024-07-23 00:31:49.309422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.157 ms
00:27:34.817 [2024-07-23 00:31:49.309436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.817 [2024-07-23 00:31:49.311422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:34.817 [2024-07-23 00:31:49.311461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:27:34.817 [2024-07-23 00:31:49.311478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.963 ms
00:27:34.817 [2024-07-23 00:31:49.311492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.817 [2024-07-23 00:31:49.311623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:34.817 [2024-07-23 00:31:49.311642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:27:34.817 [2024-07-23 00:31:49.311657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms
00:27:34.817 [2024-07-23 00:31:49.311682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.817 [2024-07-23 00:31:49.318450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.817 [2024-07-23 00:31:49.318478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:27:34.817 [2024-07-23 00:31:49.318499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.817 [2024-07-23 00:31:49.318509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.817 [2024-07-23 00:31:49.318559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.817 [2024-07-23 00:31:49.318570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:27:34.817 [2024-07-23 00:31:49.318580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.817 [2024-07-23 00:31:49.318590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.817 [2024-07-23 00:31:49.318651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.817 [2024-07-23 00:31:49.318668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:27:34.817 [2024-07-23 00:31:49.318685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.817 [2024-07-23 00:31:49.318695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.817 [2024-07-23 00:31:49.318711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.817 [2024-07-23 00:31:49.318721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:27:34.817 [2024-07-23 00:31:49.318731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.817 [2024-07-23 00:31:49.318741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.817 [2024-07-23 00:31:49.330855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.817 [2024-07-23 00:31:49.330899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:27:34.817 [2024-07-23 00:31:49.330928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.817 [2024-07-23 00:31:49.330939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.817 [2024-07-23 00:31:49.340399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.817 [2024-07-23 00:31:49.340439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:27:34.817 [2024-07-23 00:31:49.340452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.817 [2024-07-23 00:31:49.340462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.817 [2024-07-23 00:31:49.340513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.817 [2024-07-23 00:31:49.340542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:27:34.817 [2024-07-23 00:31:49.340557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.817 [2024-07-23 00:31:49.340567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.817 [2024-07-23 00:31:49.340594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.817 [2024-07-23 00:31:49.340605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:27:34.817 [2024-07-23 00:31:49.340615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.817 [2024-07-23 00:31:49.340625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.817 [2024-07-23 00:31:49.340679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.817 [2024-07-23 00:31:49.340692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:27:34.817 [2024-07-23 00:31:49.340702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.817 [2024-07-23 00:31:49.340715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.817 [2024-07-23 00:31:49.340746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.817 [2024-07-23 00:31:49.340758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:27:34.817 [2024-07-23 00:31:49.340768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.817 [2024-07-23 00:31:49.340777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.817 [2024-07-23 00:31:49.340815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.817 [2024-07-23 00:31:49.340827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:34.817 [2024-07-23 00:31:49.340837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.817 [2024-07-23 00:31:49.340850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.817 [2024-07-23 00:31:49.340898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.817 [2024-07-23 00:31:49.340910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:34.817 [2024-07-23 00:31:49.340921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.817 [2024-07-23 00:31:49.340931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.817 [2024-07-23 00:31:49.341083] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 35.561 ms, result 0
00:27:35.076
00:27:35.076
00:27:35.076 00:31:49 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:36.983 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 94383
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 94383 ']'
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 94383
00:27:36.983 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (94383) - No such process
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # echo 'Process with pid 94383 is not found'
00:27:36.983 Process with pid 94383 is not found
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:27:36.983 Remove shared memory files
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_88705295-19ad-48c6-b0e8-453394f04b93_band_md /dev/hugepages/ftl_88705295-19ad-48c6-b0e8-453394f04b93_l2p_l1 /dev/hugepages/ftl_88705295-19ad-48c6-b0e8-453394f04b93_l2p_l2 /dev/hugepages/ftl_88705295-19ad-48c6-b0e8-453394f04b93_l2p_l2_ctx /dev/hugepages/ftl_88705295-19ad-48c6-b0e8-453394f04b93_nvc_md /dev/hugepages/ftl_88705295-19ad-48c6-b0e8-453394f04b93_p2l_pool /dev/hugepages/ftl_88705295-19ad-48c6-b0e8-453394f04b93_sb /dev/hugepages/ftl_88705295-19ad-48c6-b0e8-453394f04b93_sb_shm /dev/hugepages/ftl_88705295-19ad-48c6-b0e8-453394f04b93_trim_bitmap /dev/hugepages/ftl_88705295-19ad-48c6-b0e8-453394f04b93_trim_log /dev/hugepages/ftl_88705295-19ad-48c6-b0e8-453394f04b93_trim_md /dev/hugepages/ftl_88705295-19ad-48c6-b0e8-453394f04b93_vmap
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:27:36.983 ************************************
00:27:36.983 END TEST ftl_restore_fast
00:27:36.983 ************************************
00:27:36.983
00:27:36.983 real 3m1.165s
00:27:36.983 user 2m50.398s
00:27:36.983 sys 0m12.247s
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1122 -- # xtrace_disable
00:27:36.983 00:31:51 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:27:36.983 Process with pid 87480 is not found
00:27:36.983 00:31:51 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:27:36.983 00:31:51 ftl -- ftl/ftl.sh@14 -- # killprocess 87480
00:27:36.983 00:31:51 ftl -- common/autotest_common.sh@946 -- # '[' -z 87480 ']'
00:27:36.983 00:31:51 ftl -- common/autotest_common.sh@950 -- # kill -0 87480
00:27:36.983 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (87480) - No such process
00:27:36.983 00:31:51 ftl -- common/autotest_common.sh@973 -- # echo 'Process with pid 87480 is not found'
00:27:36.983 00:31:51 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:27:36.983 00:31:51 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=96263
00:27:36.983 00:31:51 ftl -- ftl/ftl.sh@20 -- # waitforlisten 96263
00:27:36.983 00:31:51 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:27:36.983 00:31:51 ftl -- common/autotest_common.sh@827 -- # '[' -z 96263 ']'
00:27:36.983 00:31:51 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:36.983 00:31:51 ftl -- common/autotest_common.sh@832 -- # local max_retries=100
00:27:36.983 00:31:51 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:36.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:36.983 00:31:51 ftl -- common/autotest_common.sh@836 -- # xtrace_disable
00:27:36.983 00:31:51 ftl -- common/autotest_common.sh@10 -- # set +x
00:27:36.983 [2024-07-23 00:31:51.628113] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:27:36.983 [2024-07-23 00:31:51.628424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96263 ]
00:27:37.243 [2024-07-23 00:31:51.779387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:37.243 [2024-07-23 00:31:51.821567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:37.811 00:31:52 ftl -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:27:37.811 00:31:52 ftl -- common/autotest_common.sh@860 -- # return 0
00:27:37.811 00:31:52 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:27:38.070 nvme0n1
00:27:38.070 00:31:52 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:27:38.070 00:31:52 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:27:38.070 00:31:52 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:27:38.329 00:31:52 ftl -- ftl/common.sh@28 -- # stores=0cf7e7c0-3e50-4cea-96c5-cae9c664ab97
00:27:38.329 00:31:52 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:27:38.329 00:31:52 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0cf7e7c0-3e50-4cea-96c5-cae9c664ab97
00:27:38.588 00:31:53 ftl -- ftl/ftl.sh@23 -- # killprocess 96263
00:27:38.588 00:31:53 ftl -- common/autotest_common.sh@946 -- # '[' -z 96263 ']'
00:27:38.588 00:31:53 ftl -- common/autotest_common.sh@950 -- # kill -0 96263
00:27:38.588 00:31:53 ftl -- common/autotest_common.sh@951 -- # uname
00:27:38.588 00:31:53 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:38.588 00:31:53 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96263
00:27:38.588 killing process with pid 96263
00:27:38.588 00:31:53 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:27:38.588 00:31:53 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:27:38.588 00:31:53 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96263'
00:27:38.588 00:31:53 ftl -- common/autotest_common.sh@965 -- # kill 96263
00:27:38.588 00:31:53 ftl -- common/autotest_common.sh@970 -- # wait 96263
00:27:38.847 00:31:53 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:27:39.105 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:39.365 Waiting for block devices as requested
00:27:39.365 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:27:39.365 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:27:39.632 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:27:39.632 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:27:44.914 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:27:44.914 Remove shared memory files
00:27:44.914 00:31:59 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:27:44.914 00:31:59 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:27:44.914 00:31:59 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:27:44.914 00:31:59 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:27:44.914 00:31:59 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:27:44.914 00:31:59 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:27:44.914 00:31:59 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:27:44.914 ************************************
00:27:44.914 END TEST ftl
00:27:44.914 ************************************
00:27:44.914
00:27:44.914 real 12m50.346s
00:27:44.914 user 14m49.036s
00:27:44.914 sys 1m33.359s
00:27:44.914 00:31:59 ftl -- common/autotest_common.sh@1122 -- # xtrace_disable
00:27:44.914 00:31:59 ftl -- common/autotest_common.sh@10 -- # set +x
00:27:44.914 00:31:59 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:27:44.914 00:31:59 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:27:44.914 00:31:59 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:27:44.914 00:31:59 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:27:44.914 00:31:59 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:27:44.914 00:31:59 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:27:44.914 00:31:59 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:27:44.914 00:31:59 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:27:44.914 00:31:59 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:27:44.914 00:31:59 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:27:44.914 00:31:59 -- common/autotest_common.sh@720 -- # xtrace_disable
00:27:44.914 00:31:59 -- common/autotest_common.sh@10 -- # set +x
00:27:44.914 00:31:59 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:27:44.914 00:31:59 -- common/autotest_common.sh@1388 -- # local autotest_es=0
00:27:44.914 00:31:59 -- common/autotest_common.sh@1389 -- # xtrace_disable
00:27:44.914 00:31:59 -- common/autotest_common.sh@10 -- # set +x
00:27:46.819 INFO: APP EXITING
00:27:46.819 INFO: killing all VMs
00:27:46.819 INFO: killing vhost app
00:27:46.819 INFO: EXIT DONE
00:27:47.078 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:47.646 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:27:47.646 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:27:47.646 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:27:47.646 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:27:48.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:48.472 Cleaning
00:27:48.731 Removing: /var/run/dpdk/spdk0/config
00:27:48.731 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:27:48.731 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:27:48.731 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:27:48.731 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:27:48.731 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:27:48.731 Removing: /var/run/dpdk/spdk0/hugepage_info
00:27:48.731 Removing: /var/run/dpdk/spdk0
00:27:48.731 Removing: /var/run/dpdk/spdk_pid73690
00:27:48.731 Removing: /var/run/dpdk/spdk_pid73846
00:27:48.731 Removing: /var/run/dpdk/spdk_pid74045
00:27:48.731 Removing: /var/run/dpdk/spdk_pid74127
00:27:48.731 Removing: /var/run/dpdk/spdk_pid74156
00:27:48.731 Removing: /var/run/dpdk/spdk_pid74267
00:27:48.731 Removing: /var/run/dpdk/spdk_pid74285
00:27:48.731 Removing: /var/run/dpdk/spdk_pid74438
00:27:48.731 Removing: /var/run/dpdk/spdk_pid74509
00:27:48.731 Removing: /var/run/dpdk/spdk_pid74581
00:27:48.731 Removing: /var/run/dpdk/spdk_pid74673
00:27:48.731 Removing: /var/run/dpdk/spdk_pid74745
00:27:48.731 Removing: /var/run/dpdk/spdk_pid74781
00:27:48.731 Removing: /var/run/dpdk/spdk_pid74818
00:27:48.731 Removing: /var/run/dpdk/spdk_pid74880
00:27:48.731 Removing: /var/run/dpdk/spdk_pid74981
00:27:48.731 Removing: /var/run/dpdk/spdk_pid75403
00:27:48.731 Removing: /var/run/dpdk/spdk_pid75451
00:27:48.731 Removing: /var/run/dpdk/spdk_pid75497
00:27:48.731 Removing: /var/run/dpdk/spdk_pid75513
00:27:48.731 Removing: /var/run/dpdk/spdk_pid75582
00:27:48.731 Removing: /var/run/dpdk/spdk_pid75593
00:27:48.731 Removing: /var/run/dpdk/spdk_pid75662
00:27:48.731 Removing: /var/run/dpdk/spdk_pid75678
00:27:48.731 Removing: /var/run/dpdk/spdk_pid75724
00:27:48.731 Removing: /var/run/dpdk/spdk_pid75738
00:27:48.731 Removing: /var/run/dpdk/spdk_pid75784
00:27:48.731 Removing: /var/run/dpdk/spdk_pid75798
00:27:48.731 Removing: /var/run/dpdk/spdk_pid75922
00:27:48.731 Removing: /var/run/dpdk/spdk_pid75959
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76029
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76088
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76108
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76175
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76209
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76246
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76276
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76317
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76347
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76388
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76418
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76454
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76489
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76525
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76560
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76596
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76631
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76667
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76701
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76738
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76771
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76815
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76845
00:27:48.731 Removing: /var/run/dpdk/spdk_pid76887
00:27:48.990 Removing: /var/run/dpdk/spdk_pid76958
00:27:48.990 Removing: /var/run/dpdk/spdk_pid77041
00:27:48.990 Removing: /var/run/dpdk/spdk_pid77191
00:27:48.990 Removing: /var/run/dpdk/spdk_pid77259
00:27:48.990 Removing: /var/run/dpdk/spdk_pid77290
00:27:48.990 Removing: /var/run/dpdk/spdk_pid77707
00:27:48.990 Removing: /var/run/dpdk/spdk_pid77794
00:27:48.990 Removing: /var/run/dpdk/spdk_pid77892
00:27:48.990 Removing: /var/run/dpdk/spdk_pid77933
00:27:48.990 Removing: /var/run/dpdk/spdk_pid77954
00:27:48.990 Removing: /var/run/dpdk/spdk_pid78030
00:27:48.990 Removing: /var/run/dpdk/spdk_pid78644
00:27:48.990 Removing: /var/run/dpdk/spdk_pid78675
00:27:48.990 Removing: /var/run/dpdk/spdk_pid79126
00:27:48.990 Removing: /var/run/dpdk/spdk_pid79212
00:27:48.990 Removing: /var/run/dpdk/spdk_pid79306
00:27:48.990 Removing: /var/run/dpdk/spdk_pid79348
00:27:48.990 Removing: /var/run/dpdk/spdk_pid79373
00:27:48.990 Removing: /var/run/dpdk/spdk_pid79399
00:27:48.990 Removing: /var/run/dpdk/spdk_pid81227
00:27:48.990 Removing: /var/run/dpdk/spdk_pid81348
00:27:48.990 Removing: /var/run/dpdk/spdk_pid81357
00:27:48.990 Removing: /var/run/dpdk/spdk_pid81369
00:27:48.990 Removing: /var/run/dpdk/spdk_pid81416
00:27:48.990 Removing: /var/run/dpdk/spdk_pid81420
00:27:48.990 Removing: /var/run/dpdk/spdk_pid81432
00:27:48.990 Removing: /var/run/dpdk/spdk_pid81477
00:27:48.990 Removing: /var/run/dpdk/spdk_pid81481
00:27:48.990 Removing: /var/run/dpdk/spdk_pid81493
00:27:48.990 Removing: /var/run/dpdk/spdk_pid81538
00:27:48.990 Removing: /var/run/dpdk/spdk_pid81542
00:27:48.990 Removing: /var/run/dpdk/spdk_pid81554
00:27:48.990 Removing: /var/run/dpdk/spdk_pid82911
00:27:48.990 Removing: /var/run/dpdk/spdk_pid82989
00:27:48.990 Removing: /var/run/dpdk/spdk_pid83878
00:27:48.990 Removing: /var/run/dpdk/spdk_pid84233
00:27:48.990 Removing: /var/run/dpdk/spdk_pid84298
00:27:48.990 Removing: /var/run/dpdk/spdk_pid84359
00:27:48.990 Removing: /var/run/dpdk/spdk_pid84422
00:27:48.990 Removing: /var/run/dpdk/spdk_pid84500
00:27:48.990 Removing: /var/run/dpdk/spdk_pid84569
00:27:48.990 Removing: /var/run/dpdk/spdk_pid84692
00:27:48.990 Removing: /var/run/dpdk/spdk_pid84962
00:27:48.990 Removing: /var/run/dpdk/spdk_pid84987
00:27:48.990 Removing: /var/run/dpdk/spdk_pid85421
00:27:48.990 Removing: /var/run/dpdk/spdk_pid85594
00:27:48.990 Removing: /var/run/dpdk/spdk_pid85682
00:27:48.990 Removing: /var/run/dpdk/spdk_pid85775
00:27:48.990 Removing: /var/run/dpdk/spdk_pid85817
00:27:48.990 Removing: /var/run/dpdk/spdk_pid85837
00:27:48.990 Removing: /var/run/dpdk/spdk_pid86129
00:27:48.990 Removing: /var/run/dpdk/spdk_pid86166
00:27:48.990 Removing: /var/run/dpdk/spdk_pid86212
00:27:48.990 Removing: /var/run/dpdk/spdk_pid86556
00:27:48.990 Removing: /var/run/dpdk/spdk_pid86696
00:27:48.990 Removing: /var/run/dpdk/spdk_pid87480
00:27:48.990 Removing: /var/run/dpdk/spdk_pid87583
00:27:48.990 Removing: /var/run/dpdk/spdk_pid87736
00:27:48.990 Removing: /var/run/dpdk/spdk_pid87829
00:27:48.990 Removing: /var/run/dpdk/spdk_pid88146
00:27:48.990 Removing: /var/run/dpdk/spdk_pid88383
00:27:49.249 Removing: /var/run/dpdk/spdk_pid88724
00:27:49.249 Removing: /var/run/dpdk/spdk_pid88902
00:27:49.249 Removing: /var/run/dpdk/spdk_pid89025
00:27:49.249 Removing: /var/run/dpdk/spdk_pid89061
00:27:49.249 Removing: /var/run/dpdk/spdk_pid89177
00:27:49.249 Removing: /var/run/dpdk/spdk_pid89191
00:27:49.249 Removing: /var/run/dpdk/spdk_pid89227
00:27:49.249 Removing: /var/run/dpdk/spdk_pid89417
00:27:49.249 Removing: /var/run/dpdk/spdk_pid89619
00:27:49.249 Removing: /var/run/dpdk/spdk_pid90031
00:27:49.249 Removing: /var/run/dpdk/spdk_pid90451
00:27:49.249 Removing: /var/run/dpdk/spdk_pid90875
00:27:49.249 Removing: /var/run/dpdk/spdk_pid91356
00:27:49.249 Removing: /var/run/dpdk/spdk_pid91487
00:27:49.249 Removing: /var/run/dpdk/spdk_pid91563
00:27:49.249 Removing: /var/run/dpdk/spdk_pid92172
00:27:49.249 Removing: /var/run/dpdk/spdk_pid92238
00:27:49.249 Removing: /var/run/dpdk/spdk_pid92685
00:27:49.249 Removing: /var/run/dpdk/spdk_pid93032
00:27:49.249 Removing: /var/run/dpdk/spdk_pid93517
00:27:49.249 Removing: /var/run/dpdk/spdk_pid93629
00:27:49.249 Removing: /var/run/dpdk/spdk_pid93660
00:27:49.249 Removing: /var/run/dpdk/spdk_pid93707
00:27:49.249 Removing: /var/run/dpdk/spdk_pid93757
00:27:49.249 Removing: /var/run/dpdk/spdk_pid93810
00:27:49.249 Removing: /var/run/dpdk/spdk_pid93967
00:27:49.249 Removing: /var/run/dpdk/spdk_pid94036
00:27:49.249 Removing: /var/run/dpdk/spdk_pid94103
00:27:49.249 Removing: /var/run/dpdk/spdk_pid94153
00:27:49.249 Removing: /var/run/dpdk/spdk_pid94182
00:27:49.249 Removing: /var/run/dpdk/spdk_pid94241
00:27:49.249 Removing: /var/run/dpdk/spdk_pid94383
00:27:49.249 Removing: /var/run/dpdk/spdk_pid94575
00:27:49.249 Removing: /var/run/dpdk/spdk_pid94993
00:27:49.249 Removing: /var/run/dpdk/spdk_pid95403
00:27:49.249 Removing: /var/run/dpdk/spdk_pid95832
00:27:49.249 Removing: /var/run/dpdk/spdk_pid96263
00:27:49.249 Clean
00:27:49.249 00:32:03 -- common/autotest_common.sh@1447 -- # return 0
00:27:49.249 00:32:03 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:27:49.249 00:32:03 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:49.249 00:32:03 -- common/autotest_common.sh@10 -- # set +x
00:27:49.508 00:32:03 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:27:49.508 00:32:03 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:49.508 00:32:03 -- common/autotest_common.sh@10 -- # set +x
00:27:49.508 00:32:04 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:49.508 00:32:04 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:27:49.508 00:32:04 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:27:49.508 00:32:04 -- spdk/autotest.sh@391 -- # hash lcov
00:27:49.508 00:32:04 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:27:49.508 00:32:04 -- spdk/autotest.sh@393 -- # hostname
00:27:49.508 00:32:04 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:27:49.779 geninfo: WARNING: invalid characters removed from testname!
00:28:16.343 00:32:27 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:16.343 00:32:30 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:18.246 00:32:32 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:20.170 00:32:34 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:22.712 00:32:36 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:24.615 00:32:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:26.518 00:32:40 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:28:26.518 00:32:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:28:26.518 00:32:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:28:26.518 00:32:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:26.518 00:32:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:26.518 00:32:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:26.518 00:32:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:26.518 00:32:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:26.518 00:32:41 -- paths/export.sh@5 -- $ export PATH
00:28:26.518 00:32:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:26.518 00:32:41 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:28:26.518 00:32:41 -- common/autobuild_common.sh@437 -- $ date +%s
00:28:26.518 00:32:41 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721694761.XXXXXX
00:28:26.518 00:32:41 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721694761.9PJXl0
00:28:26.518 00:32:41 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:28:26.518 00:32:41 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']'
00:28:26.518 00:32:41 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:28:26.518 00:32:41 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:28:26.518 00:32:41 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:28:26.518 00:32:41 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:28:26.518 00:32:41 -- common/autobuild_common.sh@453 -- $ get_config_params
00:28:26.518 00:32:41 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:28:26.518 00:32:41 -- common/autotest_common.sh@10 -- $ set +x
00:28:26.518 00:32:41 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:28:26.518 00:32:41 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:28:26.518 00:32:41 -- pm/common@17 -- $ local monitor
00:28:26.518 00:32:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:26.518 00:32:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:26.518 00:32:41 -- pm/common@25 -- $ sleep 1
00:28:26.518 00:32:41 -- pm/common@21 -- $ date +%s
00:28:26.518 00:32:41 -- pm/common@21 -- $ date +%s
00:28:26.518 00:32:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721694761
00:28:26.518 00:32:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721694761
00:28:26.518 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721694761_collect-vmstat.pm.log
00:28:26.518 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721694761_collect-cpu-load.pm.log
00:28:27.455 00:32:42 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:28:27.455 00:32:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:28:27.455 00:32:42 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:28:27.455 00:32:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:27.455 00:32:42 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:28:27.455 00:32:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:27.455 00:32:42 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:27.455 00:32:42 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:27.455 00:32:42 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:27.455 00:32:42 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:27.714 00:32:42 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:27.714 00:32:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:28:27.714 00:32:42 -- pm/common@29 -- $ signal_monitor_resources TERM
00:28:27.714 00:32:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:28:27.714 00:32:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:27.714 00:32:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:28:27.714 00:32:42 -- pm/common@44 -- $ pid=97975
00:28:27.714 00:32:42 -- pm/common@50 -- $ kill -TERM 97975
00:28:27.714 00:32:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:27.714 00:32:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:28:27.714 00:32:42 -- pm/common@44 -- $ pid=97977
00:28:27.714 00:32:42 -- pm/common@50 -- $ kill -TERM 97977
00:28:27.714 + [[ -n 5887 ]]
00:28:27.714 + sudo kill 5887
00:28:27.724 [Pipeline] }
00:28:27.744 [Pipeline] // timeout
00:28:27.750 [Pipeline] }
00:28:27.767 [Pipeline] // stage
00:28:27.773 [Pipeline] }
00:28:27.790 [Pipeline] // catchError
00:28:27.800 [Pipeline] stage
00:28:27.803 [Pipeline] { (Stop VM)
00:28:27.817 [Pipeline] sh
00:28:28.100 + vagrant halt
00:28:31.387 ==> default: Halting domain...
00:28:37.967 [Pipeline] sh
00:28:38.248 + vagrant destroy -f
00:28:41.535 ==> default: Removing domain...
00:28:41.547 [Pipeline] sh
00:28:41.862 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:28:41.887 [Pipeline] }
00:28:41.905 [Pipeline] // stage
00:28:41.910 [Pipeline] }
00:28:41.927 [Pipeline] // dir
00:28:41.933 [Pipeline] }
00:28:41.960 [Pipeline] // wrap
00:28:41.967 [Pipeline] }
00:28:41.982 [Pipeline] // catchError
00:28:41.992 [Pipeline] stage
00:28:41.994 [Pipeline] { (Epilogue)
00:28:42.009 [Pipeline] sh
00:28:42.293 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:47.583 [Pipeline] catchError
00:28:47.585 [Pipeline] {
00:28:47.600 [Pipeline] sh
00:28:47.882 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:48.141 Artifacts sizes are good
00:28:48.150 [Pipeline] }
00:28:48.167 [Pipeline] // catchError
00:28:48.179 [Pipeline] archiveArtifacts
00:28:48.186 Archiving artifacts
00:28:48.301 [Pipeline] cleanWs
00:28:48.313 [WS-CLEANUP] Deleting project workspace...
00:28:48.313 [WS-CLEANUP] Deferred wipeout is used...
00:28:48.319 [WS-CLEANUP] done
00:28:48.321 [Pipeline] }
00:28:48.339 [Pipeline] // stage
00:28:48.346 [Pipeline] }
00:28:48.362 [Pipeline] // node
00:28:48.368 [Pipeline] End of Pipeline
00:28:48.424 Finished: SUCCESS